00:00:00.000 Started by upstream project "autotest-per-patch" build number 130557 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.070 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.074 The recommended git tool is: git 00:00:00.075 using credential 00000000-0000-0000-0000-000000000002 00:00:00.076 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.127 Fetching changes from the remote Git repository 00:00:00.129 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.186 Using shallow fetch with depth 1 00:00:00.186 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.186 > git --version # timeout=10 00:00:00.241 > git --version # 'git version 2.39.2' 00:00:00.241 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.271 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.271 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.293 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.313 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.342 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:06.343 > git config core.sparsecheckout # timeout=10 00:00:06.372 > git read-tree -mu HEAD # timeout=10 00:00:06.400 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:06.426 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:06.426 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:06.507 [Pipeline] Start of Pipeline 00:00:06.520 [Pipeline] library 00:00:06.522 Loading library shm_lib@master 00:00:06.522 Library shm_lib@master is cached. Copying from home. 00:00:06.535 [Pipeline] node 00:00:06.548 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.549 [Pipeline] { 00:00:06.583 [Pipeline] catchError 00:00:06.585 [Pipeline] { 00:00:06.595 [Pipeline] wrap 00:00:06.603 [Pipeline] { 00:00:06.609 [Pipeline] stage 00:00:06.610 [Pipeline] { (Prologue) 00:00:06.622 [Pipeline] echo 00:00:06.624 Node: VM-host-SM9 00:00:06.628 [Pipeline] cleanWs 00:00:06.634 [WS-CLEANUP] Deleting project workspace... 00:00:06.634 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.638 [WS-CLEANUP] done 00:00:06.834 [Pipeline] setCustomBuildProperty 00:00:06.977 [Pipeline] httpRequest 00:00:07.338 [Pipeline] echo 00:00:07.339 Sorcerer 10.211.164.101 is alive 00:00:07.345 [Pipeline] retry 00:00:07.347 [Pipeline] { 00:00:07.370 [Pipeline] httpRequest 00:00:07.387 HttpMethod: GET 00:00:07.391 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:07.393 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:07.398 Response Code: HTTP/1.1 200 OK 00:00:07.401 Success: Status code 200 is in the accepted range: 200,404 00:00:07.406 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:09.978 [Pipeline] } 00:00:09.995 [Pipeline] // retry 00:00:10.003 [Pipeline] sh 00:00:10.287 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:10.303 [Pipeline] httpRequest 00:00:10.651 [Pipeline] echo 00:00:10.652 Sorcerer 10.211.164.101 is alive 00:00:10.661 [Pipeline] retry 00:00:10.663 [Pipeline] { 00:00:10.675 [Pipeline] httpRequest 00:00:10.678 HttpMethod: GET 00:00:10.679 URL: http://10.211.164.101/packages/spdk_7b38c9ede93025f415b7652489344a4dd937aed1.tar.gz 00:00:10.680 Sending request to url: http://10.211.164.101/packages/spdk_7b38c9ede93025f415b7652489344a4dd937aed1.tar.gz 00:00:10.690 Response Code: HTTP/1.1 200 OK 00:00:10.690 Success: Status code 200 is in the accepted range: 200,404 00:00:10.691 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_7b38c9ede93025f415b7652489344a4dd937aed1.tar.gz 00:00:39.800 [Pipeline] } 00:00:39.816 [Pipeline] // retry 00:00:39.823 [Pipeline] sh 00:00:40.102 + tar --no-same-owner -xf spdk_7b38c9ede93025f415b7652489344a4dd937aed1.tar.gz 00:00:42.643 [Pipeline] sh 00:00:42.923 + git -C spdk log --oneline -n5 00:00:42.923 7b38c9ede bdev/nvme: changed default config to multipath 00:00:42.923 fefe29c8c bdev/nvme: ctrl config consistency check 00:00:42.923 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:00:42.923 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:00:42.923 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:00:42.941 [Pipeline] writeFile 00:00:42.955 [Pipeline] sh 00:00:43.236 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:43.248 [Pipeline] sh 00:00:43.528 + cat autorun-spdk.conf 00:00:43.528 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:43.528 SPDK_TEST_NVMF=1 00:00:43.528 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:43.528 SPDK_TEST_URING=1 00:00:43.528 SPDK_TEST_USDT=1 00:00:43.528 SPDK_RUN_UBSAN=1 00:00:43.528 NET_TYPE=virt 00:00:43.528 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:43.535 RUN_NIGHTLY=0 00:00:43.537 [Pipeline] } 00:00:43.546 [Pipeline] // stage 00:00:43.558 [Pipeline] stage 00:00:43.560 [Pipeline] { (Run VM) 00:00:43.572 [Pipeline] sh 00:00:43.852 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:43.852 + echo 'Start stage prepare_nvme.sh' 00:00:43.852 Start stage prepare_nvme.sh 00:00:43.852 + [[ -n 2 ]] 00:00:43.852 + disk_prefix=ex2 00:00:43.852 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:43.852 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:43.852 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:43.852 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:43.852 ++ SPDK_TEST_NVMF=1 00:00:43.852 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:00:43.852 ++ SPDK_TEST_URING=1 00:00:43.852 ++ SPDK_TEST_USDT=1 00:00:43.852 ++ SPDK_RUN_UBSAN=1 00:00:43.852 ++ NET_TYPE=virt 00:00:43.852 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:43.852 ++ RUN_NIGHTLY=0 00:00:43.852 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:43.852 + nvme_files=() 00:00:43.852 + declare -A nvme_files 00:00:43.852 + backend_dir=/var/lib/libvirt/images/backends 00:00:43.852 + nvme_files['nvme.img']=5G 00:00:43.852 + nvme_files['nvme-cmb.img']=5G 00:00:43.852 + nvme_files['nvme-multi0.img']=4G 00:00:43.852 + nvme_files['nvme-multi1.img']=4G 00:00:43.852 + nvme_files['nvme-multi2.img']=4G 00:00:43.852 + nvme_files['nvme-openstack.img']=8G 00:00:43.852 + nvme_files['nvme-zns.img']=5G 00:00:43.852 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:43.852 + (( SPDK_TEST_FTL == 1 )) 00:00:43.852 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:43.852 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:43.852 + for nvme in "${!nvme_files[@]}" 00:00:43.852 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:00:43.852 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:43.852 + for nvme in "${!nvme_files[@]}" 00:00:43.852 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:00:43.852 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:43.852 + for nvme in "${!nvme_files[@]}" 00:00:43.852 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:00:43.852 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:43.852 + for nvme in "${!nvme_files[@]}" 00:00:43.852 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:00:43.852 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:43.852 + for nvme in "${!nvme_files[@]}" 00:00:43.852 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:00:43.852 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:43.852 + for nvme in "${!nvme_files[@]}" 00:00:43.852 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:00:43.852 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:43.852 + for nvme in "${!nvme_files[@]}" 00:00:43.852 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:00:44.428 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:44.428 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:00:44.428 + echo 'End stage prepare_nvme.sh' 00:00:44.428 End stage prepare_nvme.sh 00:00:44.450 [Pipeline] sh 00:00:44.727 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:44.727 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b 
/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:00:44.727 00:00:44.727 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:44.727 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:44.727 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:44.727 HELP=0 00:00:44.727 DRY_RUN=0 00:00:44.727 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:00:44.727 NVME_DISKS_TYPE=nvme,nvme, 00:00:44.727 NVME_AUTO_CREATE=0 00:00:44.727 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:00:44.727 NVME_CMB=,, 00:00:44.727 NVME_PMR=,, 00:00:44.727 NVME_ZNS=,, 00:00:44.727 NVME_MS=,, 00:00:44.727 NVME_FDP=,, 00:00:44.727 SPDK_VAGRANT_DISTRO=fedora39 00:00:44.727 SPDK_VAGRANT_VMCPU=10 00:00:44.727 SPDK_VAGRANT_VMRAM=12288 00:00:44.727 SPDK_VAGRANT_PROVIDER=libvirt 00:00:44.727 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:44.727 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:44.727 SPDK_OPENSTACK_NETWORK=0 00:00:44.727 VAGRANT_PACKAGE_BOX=0 00:00:44.727 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:44.727 FORCE_DISTRO=true 00:00:44.727 VAGRANT_BOX_VERSION= 00:00:44.727 EXTRA_VAGRANTFILES= 00:00:44.727 NIC_MODEL=e1000 00:00:44.727 00:00:44.727 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:44.727 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:47.258 Bringing machine 'default' up with 'libvirt' provider... 00:00:47.824 ==> default: Creating image (snapshot of base box volume). 00:00:48.083 ==> default: Creating domain with the following settings... 
00:00:48.083 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727789319_cabc618e79b23676cf8d 00:00:48.083 ==> default: -- Domain type: kvm 00:00:48.083 ==> default: -- Cpus: 10 00:00:48.083 ==> default: -- Feature: acpi 00:00:48.083 ==> default: -- Feature: apic 00:00:48.083 ==> default: -- Feature: pae 00:00:48.083 ==> default: -- Memory: 12288M 00:00:48.083 ==> default: -- Memory Backing: hugepages: 00:00:48.083 ==> default: -- Management MAC: 00:00:48.083 ==> default: -- Loader: 00:00:48.083 ==> default: -- Nvram: 00:00:48.083 ==> default: -- Base box: spdk/fedora39 00:00:48.083 ==> default: -- Storage pool: default 00:00:48.083 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727789319_cabc618e79b23676cf8d.img (20G) 00:00:48.083 ==> default: -- Volume Cache: default 00:00:48.083 ==> default: -- Kernel: 00:00:48.083 ==> default: -- Initrd: 00:00:48.083 ==> default: -- Graphics Type: vnc 00:00:48.083 ==> default: -- Graphics Port: -1 00:00:48.083 ==> default: -- Graphics IP: 127.0.0.1 00:00:48.083 ==> default: -- Graphics Password: Not defined 00:00:48.083 ==> default: -- Video Type: cirrus 00:00:48.083 ==> default: -- Video VRAM: 9216 00:00:48.083 ==> default: -- Sound Type: 00:00:48.083 ==> default: -- Keymap: en-us 00:00:48.083 ==> default: -- TPM Path: 00:00:48.083 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:48.083 ==> default: -- Command line args: 00:00:48.083 ==> default: -> value=-device, 00:00:48.083 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:48.083 ==> default: -> value=-drive, 00:00:48.083 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:00:48.083 ==> default: -> value=-device, 00:00:48.083 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:48.083 ==> default: -> value=-device, 00:00:48.083 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:48.083 ==> default: -> value=-drive, 00:00:48.084 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:48.084 ==> default: -> value=-device, 00:00:48.084 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:48.084 ==> default: -> value=-drive, 00:00:48.084 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:48.084 ==> default: -> value=-device, 00:00:48.084 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:48.084 ==> default: -> value=-drive, 00:00:48.084 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:48.084 ==> default: -> value=-device, 00:00:48.084 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:48.084 ==> default: Creating shared folders metadata... 00:00:48.084 ==> default: Starting domain. 00:00:49.462 ==> default: Waiting for domain to get an IP address... 00:01:07.595 ==> default: Waiting for SSH to become available... 00:01:07.595 ==> default: Configuring and enabling network interfaces... 
00:01:10.129 default: SSH address: 192.168.121.27:22 00:01:10.129 default: SSH username: vagrant 00:01:10.129 default: SSH auth method: private key 00:01:12.030 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:20.138 ==> default: Mounting SSHFS shared folder... 00:01:21.071 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:21.071 ==> default: Checking Mount.. 00:01:22.447 ==> default: Folder Successfully Mounted! 00:01:22.447 ==> default: Running provisioner: file... 00:01:23.013 default: ~/.gitconfig => .gitconfig 00:01:23.615 00:01:23.615 SUCCESS! 00:01:23.616 00:01:23.616 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:23.616 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:23.616 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:23.616 00:01:23.624 [Pipeline] } 00:01:23.635 [Pipeline] // stage 00:01:23.644 [Pipeline] dir 00:01:23.645 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:23.647 [Pipeline] { 00:01:23.660 [Pipeline] catchError 00:01:23.661 [Pipeline] { 00:01:23.674 [Pipeline] sh 00:01:23.954 + vagrant ssh-config --host vagrant 00:01:23.954 + sed -ne /^Host/,$p 00:01:23.954 + tee ssh_conf 00:01:27.237 Host vagrant 00:01:27.237 HostName 192.168.121.27 00:01:27.237 User vagrant 00:01:27.237 Port 22 00:01:27.237 UserKnownHostsFile /dev/null 00:01:27.237 StrictHostKeyChecking no 00:01:27.237 PasswordAuthentication no 00:01:27.237 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:27.237 IdentitiesOnly yes 00:01:27.237 LogLevel FATAL 00:01:27.237 ForwardAgent yes 00:01:27.237 ForwardX11 yes 00:01:27.237 00:01:27.250 [Pipeline] withEnv 00:01:27.253 [Pipeline] { 00:01:27.266 [Pipeline] sh 00:01:27.545 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:27.546 source /etc/os-release 00:01:27.546 [[ -e /image.version ]] && img=$(< /image.version) 00:01:27.546 # Minimal, systemd-like check. 00:01:27.546 if [[ -e /.dockerenv ]]; then 00:01:27.546 # Clear garbage from the node's name: 00:01:27.546 # agt-er_autotest_547-896 -> autotest_547-896 00:01:27.546 # $HOSTNAME is the actual container id 00:01:27.546 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:27.546 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:27.546 # We can assume this is a mount from a host where container is running, 00:01:27.546 # so fetch its hostname to easily identify the target swarm worker. 
00:01:27.546 container="$(< /etc/hostname) ($agent)" 00:01:27.546 else 00:01:27.546 # Fallback 00:01:27.546 container=$agent 00:01:27.546 fi 00:01:27.546 fi 00:01:27.546 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:27.546 00:01:27.815 [Pipeline] } 00:01:27.829 [Pipeline] // withEnv 00:01:27.836 [Pipeline] setCustomBuildProperty 00:01:27.850 [Pipeline] stage 00:01:27.852 [Pipeline] { (Tests) 00:01:27.870 [Pipeline] sh 00:01:28.151 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:28.423 [Pipeline] sh 00:01:28.700 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:28.970 [Pipeline] timeout 00:01:28.970 Timeout set to expire in 1 hr 0 min 00:01:28.972 [Pipeline] { 00:01:28.984 [Pipeline] sh 00:01:29.262 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:29.826 HEAD is now at 7b38c9ede bdev/nvme: changed default config to multipath 00:01:29.837 [Pipeline] sh 00:01:30.143 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:30.415 [Pipeline] sh 00:01:30.695 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:30.968 [Pipeline] sh 00:01:31.246 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:31.504 ++ readlink -f spdk_repo 00:01:31.504 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:31.504 + [[ -n /home/vagrant/spdk_repo ]] 00:01:31.504 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:31.504 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:31.504 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:31.504 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:31.504 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:31.504 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:31.504 + cd /home/vagrant/spdk_repo 00:01:31.504 + source /etc/os-release 00:01:31.504 ++ NAME='Fedora Linux' 00:01:31.504 ++ VERSION='39 (Cloud Edition)' 00:01:31.504 ++ ID=fedora 00:01:31.504 ++ VERSION_ID=39 00:01:31.504 ++ VERSION_CODENAME= 00:01:31.504 ++ PLATFORM_ID=platform:f39 00:01:31.504 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:31.504 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:31.504 ++ LOGO=fedora-logo-icon 00:01:31.504 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:31.504 ++ HOME_URL=https://fedoraproject.org/ 00:01:31.504 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:31.504 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:31.504 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:31.504 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:31.504 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:31.504 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:31.504 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:31.504 ++ SUPPORT_END=2024-11-12 00:01:31.504 ++ VARIANT='Cloud Edition' 00:01:31.504 ++ VARIANT_ID=cloud 00:01:31.504 + uname -a 00:01:31.504 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:31.504 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:31.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:31.762 Hugepages 00:01:31.762 node hugesize free / total 00:01:32.020 node0 1048576kB 0 / 0 00:01:32.020 node0 2048kB 0 / 0 00:01:32.020 00:01:32.020 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:32.020 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:32.020 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:32.020 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:32.020 + rm -f /tmp/spdk-ld-path 00:01:32.020 + source autorun-spdk.conf 00:01:32.020 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.020 ++ SPDK_TEST_NVMF=1 00:01:32.020 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.020 ++ SPDK_TEST_URING=1 00:01:32.020 ++ SPDK_TEST_USDT=1 00:01:32.020 ++ SPDK_RUN_UBSAN=1 00:01:32.020 ++ NET_TYPE=virt 00:01:32.020 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.020 ++ RUN_NIGHTLY=0 00:01:32.020 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:32.020 + [[ -n '' ]] 00:01:32.020 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:32.020 + for M in /var/spdk/build-*-manifest.txt 00:01:32.020 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:32.020 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:32.020 + for M in /var/spdk/build-*-manifest.txt 00:01:32.020 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:32.020 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:32.020 + for M in /var/spdk/build-*-manifest.txt 00:01:32.020 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:32.020 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:32.020 ++ uname 00:01:32.020 + [[ Linux == \L\i\n\u\x ]] 00:01:32.020 + sudo dmesg -T 00:01:32.020 + sudo dmesg --clear 00:01:32.020 + dmesg_pid=5262 00:01:32.020 + sudo dmesg -Tw 00:01:32.020 + [[ Fedora Linux == FreeBSD ]] 00:01:32.020 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.020 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.020 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:32.020 + [[ -x /usr/src/fio-static/fio ]] 00:01:32.020 + export FIO_BIN=/usr/src/fio-static/fio 00:01:32.020 + FIO_BIN=/usr/src/fio-static/fio 00:01:32.020 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:32.020 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:32.020 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:32.020 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.020 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.020 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:32.020 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.020 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.020 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:32.020 Test configuration: 00:01:32.020 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.020 SPDK_TEST_NVMF=1 00:01:32.020 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.020 SPDK_TEST_URING=1 00:01:32.020 SPDK_TEST_USDT=1 00:01:32.020 SPDK_RUN_UBSAN=1 00:01:32.020 NET_TYPE=virt 00:01:32.020 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.279 RUN_NIGHTLY=0 13:29:23 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:32.279 13:29:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:32.279 13:29:23 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:32.279 13:29:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:32.279 13:29:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:32.279 13:29:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:32.279 13:29:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.279 13:29:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.279 13:29:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.279 13:29:23 -- paths/export.sh@5 -- $ export PATH 00:01:32.279 13:29:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.280 13:29:23 -- common/autobuild_common.sh@478 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:01:32.280 13:29:23 -- common/autobuild_common.sh@479 -- $ date +%s 00:01:32.280 13:29:23 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727789363.XXXXXX 00:01:32.280 13:29:23 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727789363.MVvPLa 00:01:32.280 13:29:23 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:01:32.280 13:29:23 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:01:32.280 13:29:23 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:32.280 13:29:23 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:32.280 13:29:23 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:32.280 13:29:23 -- common/autobuild_common.sh@495 -- $ get_config_params 00:01:32.280 13:29:23 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:32.280 13:29:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.280 13:29:23 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:32.280 13:29:23 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:01:32.280 13:29:23 -- pm/common@17 -- $ local monitor 00:01:32.280 13:29:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.280 13:29:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.280 13:29:23 -- pm/common@25 -- $ sleep 1 00:01:32.280 13:29:23 -- pm/common@21 -- $ date +%s 00:01:32.280 13:29:23 -- pm/common@21 -- $ date +%s 00:01:32.280 13:29:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727789363 00:01:32.280 13:29:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727789363 00:01:32.280 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727789363_collect-cpu-load.pm.log 00:01:32.280 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727789363_collect-vmstat.pm.log 00:01:33.212 13:29:24 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:01:33.212 13:29:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:33.212 13:29:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:33.212 13:29:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:33.212 13:29:24 -- spdk/autobuild.sh@16 -- $ date -u 00:01:33.212 Tue Oct 1 01:29:24 PM UTC 2024 00:01:33.212 13:29:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:33.212 v25.01-pre-19-g7b38c9ede 00:01:33.212 13:29:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:33.212 13:29:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:33.212 13:29:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:33.212 13:29:24 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:33.212 13:29:24 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:33.212 13:29:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.212 
************************************ 00:01:33.212 START TEST ubsan 00:01:33.212 ************************************ 00:01:33.212 using ubsan 00:01:33.212 13:29:24 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:33.212 00:01:33.212 real 0m0.000s 00:01:33.212 user 0m0.000s 00:01:33.212 sys 0m0.000s 00:01:33.212 13:29:24 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:33.212 13:29:24 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:33.212 ************************************ 00:01:33.212 END TEST ubsan 00:01:33.212 ************************************ 00:01:33.212 13:29:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:33.212 13:29:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:33.212 13:29:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:33.212 13:29:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:33.212 13:29:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:33.212 13:29:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:33.212 13:29:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:33.212 13:29:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:33.212 13:29:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:33.469 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:33.469 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:33.727 Using 'verbs' RDMA provider 00:01:46.914 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:01.783 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:01.783 Creating mk/config.mk...done. 00:02:01.783 Creating mk/cc.flags.mk...done. 00:02:01.783 Type 'make' to build. 00:02:01.783 13:29:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:01.783 13:29:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:01.783 13:29:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:01.783 13:29:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.783 ************************************ 00:02:01.783 START TEST make 00:02:01.783 ************************************ 00:02:01.783 13:29:52 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:01.783 make[1]: Nothing to be done for 'all'. 
00:02:13.978 The Meson build system 00:02:13.978 Version: 1.5.0 00:02:13.978 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:13.978 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:13.978 Build type: native build 00:02:13.978 Program cat found: YES (/usr/bin/cat) 00:02:13.978 Project name: DPDK 00:02:13.978 Project version: 24.03.0 00:02:13.978 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:13.978 C linker for the host machine: cc ld.bfd 2.40-14 00:02:13.978 Host machine cpu family: x86_64 00:02:13.978 Host machine cpu: x86_64 00:02:13.978 Message: ## Building in Developer Mode ## 00:02:13.978 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:13.978 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:13.978 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:13.978 Program python3 found: YES (/usr/bin/python3) 00:02:13.978 Program cat found: YES (/usr/bin/cat) 00:02:13.978 Compiler for C supports arguments -march=native: YES 00:02:13.978 Checking for size of "void *" : 8 00:02:13.978 Checking for size of "void *" : 8 (cached) 00:02:13.978 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:13.978 Library m found: YES 00:02:13.978 Library numa found: YES 00:02:13.978 Has header "numaif.h" : YES 00:02:13.978 Library fdt found: NO 00:02:13.978 Library execinfo found: NO 00:02:13.978 Has header "execinfo.h" : YES 00:02:13.978 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:13.978 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:13.978 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:13.978 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:13.978 Run-time dependency openssl found: YES 3.1.1 00:02:13.978 Run-time dependency libpcap found: YES 1.10.4 00:02:13.978 Has header "pcap.h" with dependency libpcap: YES 00:02:13.978 Compiler for C supports arguments -Wcast-qual: YES 00:02:13.978 Compiler for C supports arguments -Wdeprecated: YES 00:02:13.978 Compiler for C supports arguments -Wformat: YES 00:02:13.978 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:13.978 Compiler for C supports arguments -Wformat-security: NO 00:02:13.978 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:13.978 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:13.978 Compiler for C supports arguments -Wnested-externs: YES 00:02:13.978 Compiler for C supports arguments -Wold-style-definition: YES 00:02:13.978 Compiler for C supports arguments -Wpointer-arith: YES 00:02:13.978 Compiler for C supports arguments -Wsign-compare: YES 00:02:13.978 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:13.978 Compiler for C supports arguments -Wundef: YES 00:02:13.978 Compiler for C supports arguments -Wwrite-strings: YES 00:02:13.978 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:13.978 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:13.978 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:13.979 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:13.979 Program objdump found: YES (/usr/bin/objdump) 00:02:13.979 Compiler for C supports arguments -mavx512f: YES 00:02:13.979 Checking if "AVX512 checking" compiles: YES 00:02:13.979 Fetching value of define "__SSE4_2__" : 1 00:02:13.979 Fetching value of define 
"__AES__" : 1 00:02:13.979 Fetching value of define "__AVX__" : 1 00:02:13.979 Fetching value of define "__AVX2__" : 1 00:02:13.979 Fetching value of define "__AVX512BW__" : (undefined) 00:02:13.979 Fetching value of define "__AVX512CD__" : (undefined) 00:02:13.979 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:13.979 Fetching value of define "__AVX512F__" : (undefined) 00:02:13.979 Fetching value of define "__AVX512VL__" : (undefined) 00:02:13.979 Fetching value of define "__PCLMUL__" : 1 00:02:13.979 Fetching value of define "__RDRND__" : 1 00:02:13.979 Fetching value of define "__RDSEED__" : 1 00:02:13.979 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:13.979 Fetching value of define "__znver1__" : (undefined) 00:02:13.979 Fetching value of define "__znver2__" : (undefined) 00:02:13.979 Fetching value of define "__znver3__" : (undefined) 00:02:13.979 Fetching value of define "__znver4__" : (undefined) 00:02:13.979 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:13.979 Message: lib/log: Defining dependency "log" 00:02:13.979 Message: lib/kvargs: Defining dependency "kvargs" 00:02:13.979 Message: lib/telemetry: Defining dependency "telemetry" 00:02:13.979 Checking for function "getentropy" : NO 00:02:13.979 Message: lib/eal: Defining dependency "eal" 00:02:13.979 Message: lib/ring: Defining dependency "ring" 00:02:13.979 Message: lib/rcu: Defining dependency "rcu" 00:02:13.979 Message: lib/mempool: Defining dependency "mempool" 00:02:13.979 Message: lib/mbuf: Defining dependency "mbuf" 00:02:13.979 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:13.979 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:13.979 Compiler for C supports arguments -mpclmul: YES 00:02:13.979 Compiler for C supports arguments -maes: YES 00:02:13.979 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:13.979 Compiler for C supports arguments -mavx512bw: YES 00:02:13.979 Compiler for C supports arguments -mavx512dq: YES 00:02:13.979 Compiler for C supports arguments -mavx512vl: YES 00:02:13.979 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:13.979 Compiler for C supports arguments -mavx2: YES 00:02:13.979 Compiler for C supports arguments -mavx: YES 00:02:13.979 Message: lib/net: Defining dependency "net" 00:02:13.979 Message: lib/meter: Defining dependency "meter" 00:02:13.979 Message: lib/ethdev: Defining dependency "ethdev" 00:02:13.979 Message: lib/pci: Defining dependency "pci" 00:02:13.979 Message: lib/cmdline: Defining dependency "cmdline" 00:02:13.979 Message: lib/hash: Defining dependency "hash" 00:02:13.979 Message: lib/timer: Defining dependency "timer" 00:02:13.979 Message: lib/compressdev: Defining dependency "compressdev" 00:02:13.979 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:13.979 Message: lib/dmadev: Defining dependency "dmadev" 00:02:13.979 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:13.979 Message: lib/power: Defining dependency "power" 00:02:13.979 Message: lib/reorder: Defining dependency "reorder" 00:02:13.979 Message: lib/security: Defining dependency "security" 00:02:13.979 Has header "linux/userfaultfd.h" : YES 00:02:13.979 Has header "linux/vduse.h" : YES 00:02:13.979 Message: lib/vhost: Defining dependency "vhost" 00:02:13.979 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:13.979 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:13.979 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:13.979 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:13.979 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:13.979 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:13.979 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:13.979 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:13.979 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:13.979 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:13.979 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:13.979 Configuring doxy-api-html.conf using configuration 00:02:13.979 Configuring doxy-api-man.conf using configuration 00:02:13.979 Program mandb found: YES (/usr/bin/mandb) 00:02:13.979 Program sphinx-build found: NO 00:02:13.979 Configuring rte_build_config.h using configuration 00:02:13.979 Message: 00:02:13.979 ================= 00:02:13.979 Applications Enabled 00:02:13.979 ================= 00:02:13.979 00:02:13.979 apps: 00:02:13.979 00:02:13.979 00:02:13.979 Message: 00:02:13.979 ================= 00:02:13.979 Libraries Enabled 00:02:13.979 ================= 00:02:13.979 00:02:13.979 libs: 00:02:13.979 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:13.979 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:13.979 cryptodev, dmadev, power, reorder, security, vhost, 00:02:13.979 00:02:13.979 Message: 00:02:13.979 =============== 00:02:13.979 Drivers Enabled 00:02:13.979 =============== 00:02:13.979 00:02:13.979 common: 00:02:13.979 00:02:13.979 bus: 00:02:13.979 pci, vdev, 00:02:13.979 mempool: 00:02:13.979 ring, 00:02:13.979 dma: 00:02:13.979 00:02:13.979 net: 00:02:13.979 00:02:13.979 crypto: 00:02:13.979 00:02:13.979 compress: 00:02:13.979 00:02:13.979 vdpa: 00:02:13.979 00:02:13.979 00:02:13.979 Message: 00:02:13.979 ================= 00:02:13.979 Content Skipped 00:02:13.979 ================= 00:02:13.979 00:02:13.979 apps: 00:02:13.979 dumpcap: explicitly disabled via build config 00:02:13.979 graph: explicitly disabled via build config 00:02:13.979 pdump: explicitly disabled via build config 00:02:13.979 proc-info: explicitly disabled via build config 00:02:13.979 test-acl: explicitly disabled via build config 00:02:13.979 test-bbdev: explicitly disabled via build config 00:02:13.979 test-cmdline: explicitly disabled via build config 00:02:13.979 test-compress-perf: explicitly disabled via build config 00:02:13.979 test-crypto-perf: explicitly disabled via build config 00:02:13.979 test-dma-perf: explicitly disabled via build config 00:02:13.979 test-eventdev: explicitly disabled via build config 00:02:13.979 test-fib: explicitly disabled via build config 00:02:13.979 test-flow-perf: explicitly disabled via build config 00:02:13.979 test-gpudev: explicitly disabled via build config 00:02:13.979 test-mldev: explicitly disabled via build config 00:02:13.979 test-pipeline: explicitly disabled via build config 00:02:13.979 test-pmd: explicitly disabled via build config 00:02:13.979 test-regex: explicitly disabled via build config 00:02:13.979 test-sad: explicitly disabled via build config 00:02:13.979 test-security-perf: explicitly disabled via build config 00:02:13.979 00:02:13.979 libs: 00:02:13.979 argparse: explicitly disabled via build config 00:02:13.979 metrics: explicitly disabled via build config 00:02:13.979 acl: explicitly disabled via build config 00:02:13.979 bbdev: explicitly disabled via build config 
00:02:13.979 bitratestats: explicitly disabled via build config 00:02:13.979 bpf: explicitly disabled via build config 00:02:13.979 cfgfile: explicitly disabled via build config 00:02:13.979 distributor: explicitly disabled via build config 00:02:13.979 efd: explicitly disabled via build config 00:02:13.979 eventdev: explicitly disabled via build config 00:02:13.979 dispatcher: explicitly disabled via build config 00:02:13.979 gpudev: explicitly disabled via build config 00:02:13.979 gro: explicitly disabled via build config 00:02:13.979 gso: explicitly disabled via build config 00:02:13.979 ip_frag: explicitly disabled via build config 00:02:13.979 jobstats: explicitly disabled via build config 00:02:13.979 latencystats: explicitly disabled via build config 00:02:13.979 lpm: explicitly disabled via build config 00:02:13.979 member: explicitly disabled via build config 00:02:13.979 pcapng: explicitly disabled via build config 00:02:13.979 rawdev: explicitly disabled via build config 00:02:13.979 regexdev: explicitly disabled via build config 00:02:13.979 mldev: explicitly disabled via build config 00:02:13.979 rib: explicitly disabled via build config 00:02:13.979 sched: explicitly disabled via build config 00:02:13.979 stack: explicitly disabled via build config 00:02:13.979 ipsec: explicitly disabled via build config 00:02:13.979 pdcp: explicitly disabled via build config 00:02:13.979 fib: explicitly disabled via build config 00:02:13.979 port: explicitly disabled via build config 00:02:13.979 pdump: explicitly disabled via build config 00:02:13.979 table: explicitly disabled via build config 00:02:13.979 pipeline: explicitly disabled via build config 00:02:13.979 graph: explicitly disabled via build config 00:02:13.979 node: explicitly disabled via build config 00:02:13.979 00:02:13.979 drivers: 00:02:13.979 common/cpt: not in enabled drivers build config 00:02:13.979 common/dpaax: not in enabled drivers build config 00:02:13.979 common/iavf: not in enabled drivers build config 00:02:13.979 common/idpf: not in enabled drivers build config 00:02:13.979 common/ionic: not in enabled drivers build config 00:02:13.979 common/mvep: not in enabled drivers build config 00:02:13.979 common/octeontx: not in enabled drivers build config 00:02:13.979 bus/auxiliary: not in enabled drivers build config 00:02:13.979 bus/cdx: not in enabled drivers build config 00:02:13.979 bus/dpaa: not in enabled drivers build config 00:02:13.979 bus/fslmc: not in enabled drivers build config 00:02:13.979 bus/ifpga: not in enabled drivers build config 00:02:13.979 bus/platform: not in enabled drivers build config 00:02:13.979 bus/uacce: not in enabled drivers build config 00:02:13.979 bus/vmbus: not in enabled drivers build config 00:02:13.979 common/cnxk: not in enabled drivers build config 00:02:13.979 common/mlx5: not in enabled drivers build config 00:02:13.979 common/nfp: not in enabled drivers build config 00:02:13.979 common/nitrox: not in enabled drivers build config 00:02:13.979 common/qat: not in enabled drivers build config 00:02:13.979 common/sfc_efx: not in enabled drivers build config 00:02:13.979 mempool/bucket: not in enabled drivers build config 00:02:13.979 mempool/cnxk: not in enabled drivers build config 00:02:13.979 mempool/dpaa: not in enabled drivers build config 00:02:13.980 mempool/dpaa2: not in enabled drivers build config 00:02:13.980 mempool/octeontx: not in enabled drivers build config 00:02:13.980 mempool/stack: not in enabled drivers build config 00:02:13.980 dma/cnxk: not in enabled 
drivers build config 00:02:13.980 dma/dpaa: not in enabled drivers build config 00:02:13.980 dma/dpaa2: not in enabled drivers build config 00:02:13.980 dma/hisilicon: not in enabled drivers build config 00:02:13.980 dma/idxd: not in enabled drivers build config 00:02:13.980 dma/ioat: not in enabled drivers build config 00:02:13.980 dma/skeleton: not in enabled drivers build config 00:02:13.980 net/af_packet: not in enabled drivers build config 00:02:13.980 net/af_xdp: not in enabled drivers build config 00:02:13.980 net/ark: not in enabled drivers build config 00:02:13.980 net/atlantic: not in enabled drivers build config 00:02:13.980 net/avp: not in enabled drivers build config 00:02:13.980 net/axgbe: not in enabled drivers build config 00:02:13.980 net/bnx2x: not in enabled drivers build config 00:02:13.980 net/bnxt: not in enabled drivers build config 00:02:13.980 net/bonding: not in enabled drivers build config 00:02:13.980 net/cnxk: not in enabled drivers build config 00:02:13.980 net/cpfl: not in enabled drivers build config 00:02:13.980 net/cxgbe: not in enabled drivers build config 00:02:13.980 net/dpaa: not in enabled drivers build config 00:02:13.980 net/dpaa2: not in enabled drivers build config 00:02:13.980 net/e1000: not in enabled drivers build config 00:02:13.980 net/ena: not in enabled drivers build config 00:02:13.980 net/enetc: not in enabled drivers build config 00:02:13.980 net/enetfec: not in enabled drivers build config 00:02:13.980 net/enic: not in enabled drivers build config 00:02:13.980 net/failsafe: not in enabled drivers build config 00:02:13.980 net/fm10k: not in enabled drivers build config 00:02:13.980 net/gve: not in enabled drivers build config 00:02:13.980 net/hinic: not in enabled drivers build config 00:02:13.980 net/hns3: not in enabled drivers build config 00:02:13.980 net/i40e: not in enabled drivers build config 00:02:13.980 net/iavf: not in enabled drivers build config 00:02:13.980 net/ice: not in enabled drivers build config 00:02:13.980 net/idpf: not in enabled drivers build config 00:02:13.980 net/igc: not in enabled drivers build config 00:02:13.980 net/ionic: not in enabled drivers build config 00:02:13.980 net/ipn3ke: not in enabled drivers build config 00:02:13.980 net/ixgbe: not in enabled drivers build config 00:02:13.980 net/mana: not in enabled drivers build config 00:02:13.980 net/memif: not in enabled drivers build config 00:02:13.980 net/mlx4: not in enabled drivers build config 00:02:13.980 net/mlx5: not in enabled drivers build config 00:02:13.980 net/mvneta: not in enabled drivers build config 00:02:13.980 net/mvpp2: not in enabled drivers build config 00:02:13.980 net/netvsc: not in enabled drivers build config 00:02:13.980 net/nfb: not in enabled drivers build config 00:02:13.980 net/nfp: not in enabled drivers build config 00:02:13.980 net/ngbe: not in enabled drivers build config 00:02:13.980 net/null: not in enabled drivers build config 00:02:13.980 net/octeontx: not in enabled drivers build config 00:02:13.980 net/octeon_ep: not in enabled drivers build config 00:02:13.980 net/pcap: not in enabled drivers build config 00:02:13.980 net/pfe: not in enabled drivers build config 00:02:13.980 net/qede: not in enabled drivers build config 00:02:13.980 net/ring: not in enabled drivers build config 00:02:13.980 net/sfc: not in enabled drivers build config 00:02:13.980 net/softnic: not in enabled drivers build config 00:02:13.980 net/tap: not in enabled drivers build config 00:02:13.980 net/thunderx: not in enabled drivers build 
config 00:02:13.980 net/txgbe: not in enabled drivers build config 00:02:13.980 net/vdev_netvsc: not in enabled drivers build config 00:02:13.980 net/vhost: not in enabled drivers build config 00:02:13.980 net/virtio: not in enabled drivers build config 00:02:13.980 net/vmxnet3: not in enabled drivers build config 00:02:13.980 raw/*: missing internal dependency, "rawdev" 00:02:13.980 crypto/armv8: not in enabled drivers build config 00:02:13.980 crypto/bcmfs: not in enabled drivers build config 00:02:13.980 crypto/caam_jr: not in enabled drivers build config 00:02:13.980 crypto/ccp: not in enabled drivers build config 00:02:13.980 crypto/cnxk: not in enabled drivers build config 00:02:13.980 crypto/dpaa_sec: not in enabled drivers build config 00:02:13.980 crypto/dpaa2_sec: not in enabled drivers build config 00:02:13.980 crypto/ipsec_mb: not in enabled drivers build config 00:02:13.980 crypto/mlx5: not in enabled drivers build config 00:02:13.980 crypto/mvsam: not in enabled drivers build config 00:02:13.980 crypto/nitrox: not in enabled drivers build config 00:02:13.980 crypto/null: not in enabled drivers build config 00:02:13.980 crypto/octeontx: not in enabled drivers build config 00:02:13.980 crypto/openssl: not in enabled drivers build config 00:02:13.980 crypto/scheduler: not in enabled drivers build config 00:02:13.980 crypto/uadk: not in enabled drivers build config 00:02:13.980 crypto/virtio: not in enabled drivers build config 00:02:13.980 compress/isal: not in enabled drivers build config 00:02:13.980 compress/mlx5: not in enabled drivers build config 00:02:13.980 compress/nitrox: not in enabled drivers build config 00:02:13.980 compress/octeontx: not in enabled drivers build config 00:02:13.980 compress/zlib: not in enabled drivers build config 00:02:13.980 regex/*: missing internal dependency, "regexdev" 00:02:13.980 ml/*: missing internal dependency, "mldev" 00:02:13.980 vdpa/ifc: not in enabled drivers build config 00:02:13.980 vdpa/mlx5: not in enabled drivers build config 00:02:13.980 vdpa/nfp: not in enabled drivers build config 00:02:13.980 vdpa/sfc: not in enabled drivers build config 00:02:13.980 event/*: missing internal dependency, "eventdev" 00:02:13.980 baseband/*: missing internal dependency, "bbdev" 00:02:13.980 gpu/*: missing internal dependency, "gpudev" 00:02:13.980 00:02:13.980 00:02:13.980 Build targets in project: 85 00:02:13.980 00:02:13.980 DPDK 24.03.0 00:02:13.980 00:02:13.980 User defined options 00:02:13.980 buildtype : debug 00:02:13.980 default_library : shared 00:02:13.980 libdir : lib 00:02:13.980 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:13.980 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:13.980 c_link_args : 00:02:13.980 cpu_instruction_set: native 00:02:13.980 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:13.980 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:13.980 enable_docs : false 00:02:13.980 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:13.980 enable_kmods : false 00:02:13.980 max_lcores : 128 00:02:13.980 tests : false 00:02:13.980 
00:02:13.980 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:14.238 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:14.497 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:14.497 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:14.497 [3/268] Linking static target lib/librte_kvargs.a 00:02:14.497 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:14.497 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:14.497 [6/268] Linking static target lib/librte_log.a 00:02:15.063 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.063 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.063 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:15.063 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:15.063 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:15.321 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.321 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:15.321 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:15.321 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:15.321 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.321 [17/268] Linking target lib/librte_log.so.24.1 00:02:15.321 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.321 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:15.321 [20/268] Linking static target lib/librte_telemetry.a 00:02:15.579 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:15.837 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:15.837 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:16.096 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:16.096 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:16.096 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:16.096 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:16.096 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:16.096 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:16.353 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:16.353 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:16.353 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:16.353 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.353 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:16.353 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:16.609 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:16.865 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:16.865 [38/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:16.865 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:16.865 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:16.865 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:16.865 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:16.865 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:17.122 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:17.122 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:17.122 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:17.122 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:17.378 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:17.378 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:17.635 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:17.635 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:17.893 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:17.893 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:18.150 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:18.150 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:18.150 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:18.150 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.150 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:18.150 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:18.407 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:18.407 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:18.407 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:18.665 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:18.665 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:18.665 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:18.923 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:18.923 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:18.923 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:18.923 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:19.181 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:19.181 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:19.181 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:19.181 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:19.181 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:19.181 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:19.439 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:19.439 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:19.439 [78/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:19.439 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:19.697 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:19.697 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:19.697 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:19.955 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:19.955 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:19.955 [85/268] Linking static target lib/librte_ring.a 00:02:19.955 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:20.213 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:20.213 [88/268] Linking static target lib/librte_rcu.a 00:02:20.213 [89/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:20.213 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:20.213 [91/268] Linking static target lib/librte_eal.a 00:02:20.471 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:20.471 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:20.471 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.471 [95/268] Linking static target lib/librte_mempool.a 00:02:20.471 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:20.471 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.729 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:20.729 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:20.729 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:20.729 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:20.987 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:21.246 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:21.246 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:21.246 [105/268] Linking static target lib/librte_mbuf.a 00:02:21.246 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:21.246 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:21.246 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:21.504 [109/268] Linking static target lib/librte_net.a 00:02:21.504 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:21.504 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:21.504 [112/268] Linking static target lib/librte_meter.a 00:02:21.764 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.764 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:21.764 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.764 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:22.042 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.042 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:22.309 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:22.567 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:22.567 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:22.567 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:22.825 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:23.083 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:23.083 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:23.083 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:23.083 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:23.083 [128/268] Linking static target lib/librte_pci.a 00:02:23.341 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:23.341 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:23.341 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:23.341 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:23.341 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:23.341 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:23.600 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:23.600 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:23.600 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:23.600 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:23.600 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.600 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:23.600 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:23.600 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:23.600 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:23.600 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:23.600 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:23.600 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:23.600 [147/268] Linking static target lib/librte_ethdev.a 00:02:23.858 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:23.858 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:24.117 [150/268] Linking static target lib/librte_cmdline.a 00:02:24.117 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:24.374 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.374 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.374 [154/268] Linking static target lib/librte_timer.a 00:02:24.374 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:24.632 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.632 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:24.632 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:24.632 [159/268] Linking static target lib/librte_hash.a 00:02:24.891 [160/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.891 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.891 [162/268] Linking static target lib/librte_compressdev.a 00:02:24.891 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.169 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:25.169 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:25.428 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:25.428 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:25.428 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:25.686 [169/268] Linking static target lib/librte_dmadev.a 00:02:25.686 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.686 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:25.686 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:25.686 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:25.686 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.944 [175/268] Linking static target lib/librte_cryptodev.a 00:02:25.944 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.944 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:25.944 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.202 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:26.202 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:26.579 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:26.579 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:26.579 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.579 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:26.838 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:26.838 [186/268] Linking static target lib/librte_power.a 00:02:27.096 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:27.096 [188/268] Linking static target lib/librte_reorder.a 00:02:27.096 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:27.096 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:27.096 [191/268] Linking static target lib/librte_security.a 00:02:27.096 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:27.356 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:27.613 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.613 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:27.871 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.129 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.129 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:28.129 [199/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:28.387 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.387 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:28.387 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:28.645 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:28.645 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:28.903 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:28.903 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:28.903 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:28.903 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:29.160 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:29.160 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:29.160 [211/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:29.419 [212/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:29.419 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:29.419 [214/268] Linking static target drivers/librte_bus_vdev.a 00:02:29.419 [215/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:29.419 [216/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:29.419 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:29.419 [218/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:29.419 [219/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:29.677 [220/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.677 [221/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.677 [222/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:29.677 [223/268] Linking static target drivers/librte_mempool_ring.a 00:02:29.677 [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.677 [225/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:29.677 [226/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:29.677 [227/268] Linking static target drivers/librte_bus_pci.a 00:02:30.242 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.808 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:30.808 [230/268] Linking static target lib/librte_vhost.a 00:02:31.375 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.633 [232/268] Linking target lib/librte_eal.so.24.1 00:02:31.633 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:31.633 [234/268] Linking target lib/librte_timer.so.24.1 00:02:31.633 [235/268] Linking target lib/librte_pci.so.24.1 00:02:31.633 [236/268] Linking target lib/librte_meter.so.24.1 00:02:31.633 [237/268] Linking target lib/librte_ring.so.24.1 00:02:31.633 [238/268] Linking target 
drivers/librte_bus_vdev.so.24.1 00:02:31.633 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:31.633 [240/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.891 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:31.891 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:31.891 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:31.891 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:31.891 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:31.891 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:31.891 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:31.891 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:31.891 [249/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.150 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:32.150 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:32.150 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:32.150 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:32.150 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:32.409 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:32.409 [256/268] Linking target lib/librte_net.so.24.1 00:02:32.409 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:32.409 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:32.409 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:32.409 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:32.409 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:32.409 [262/268] Linking target lib/librte_hash.so.24.1 00:02:32.409 [263/268] Linking target lib/librte_security.so.24.1 00:02:32.409 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:32.667 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:32.667 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:32.667 [267/268] Linking target lib/librte_power.so.24.1 00:02:32.667 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:32.667 INFO: autodetecting backend as ninja 00:02:32.667 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:59.203 CC lib/ut/ut.o 00:02:59.203 CC lib/ut_mock/mock.o 00:02:59.203 CC lib/log/log.o 00:02:59.203 CC lib/log/log_flags.o 00:02:59.203 CC lib/log/log_deprecated.o 00:02:59.203 LIB libspdk_log.a 00:02:59.203 LIB libspdk_ut.a 00:02:59.203 LIB libspdk_ut_mock.a 00:02:59.203 SO libspdk_ut.so.2.0 00:02:59.203 SO libspdk_ut_mock.so.6.0 00:02:59.203 SO libspdk_log.so.7.0 00:02:59.203 SYMLINK libspdk_ut_mock.so 00:02:59.203 SYMLINK libspdk_ut.so 00:02:59.203 SYMLINK libspdk_log.so 00:02:59.203 CC lib/util/bit_array.o 00:02:59.203 CC lib/ioat/ioat.o 00:02:59.203 CC lib/util/base64.o 00:02:59.203 CC lib/util/cpuset.o 00:02:59.203 CC lib/util/crc16.o 00:02:59.203 CC lib/util/crc32c.o 00:02:59.203 CC lib/util/crc32.o 00:02:59.203 CXX lib/trace_parser/trace.o 00:02:59.203 CC lib/dma/dma.o 00:02:59.203 CC lib/vfio_user/host/vfio_user_pci.o 00:02:59.203 
CC lib/util/crc32_ieee.o 00:02:59.203 CC lib/util/crc64.o 00:02:59.203 CC lib/util/dif.o 00:02:59.203 CC lib/util/fd.o 00:02:59.203 CC lib/util/fd_group.o 00:02:59.203 LIB libspdk_dma.a 00:02:59.203 CC lib/util/file.o 00:02:59.203 SO libspdk_dma.so.5.0 00:02:59.203 CC lib/vfio_user/host/vfio_user.o 00:02:59.203 CC lib/util/hexlify.o 00:02:59.203 LIB libspdk_ioat.a 00:02:59.203 SYMLINK libspdk_dma.so 00:02:59.203 CC lib/util/iov.o 00:02:59.203 CC lib/util/math.o 00:02:59.203 SO libspdk_ioat.so.7.0 00:02:59.203 SYMLINK libspdk_ioat.so 00:02:59.204 CC lib/util/net.o 00:02:59.204 CC lib/util/pipe.o 00:02:59.204 CC lib/util/strerror_tls.o 00:02:59.204 CC lib/util/string.o 00:02:59.204 CC lib/util/uuid.o 00:02:59.204 LIB libspdk_vfio_user.a 00:02:59.204 CC lib/util/xor.o 00:02:59.204 SO libspdk_vfio_user.so.5.0 00:02:59.204 CC lib/util/zipf.o 00:02:59.204 CC lib/util/md5.o 00:02:59.204 SYMLINK libspdk_vfio_user.so 00:02:59.204 LIB libspdk_util.a 00:02:59.204 SO libspdk_util.so.10.0 00:02:59.204 LIB libspdk_trace_parser.a 00:02:59.204 SYMLINK libspdk_util.so 00:02:59.204 SO libspdk_trace_parser.so.6.0 00:02:59.204 SYMLINK libspdk_trace_parser.so 00:02:59.204 CC lib/conf/conf.o 00:02:59.204 CC lib/idxd/idxd.o 00:02:59.204 CC lib/rdma_utils/rdma_utils.o 00:02:59.204 CC lib/idxd/idxd_user.o 00:02:59.204 CC lib/idxd/idxd_kernel.o 00:02:59.204 CC lib/env_dpdk/env.o 00:02:59.204 CC lib/env_dpdk/memory.o 00:02:59.204 CC lib/vmd/vmd.o 00:02:59.204 CC lib/rdma_provider/common.o 00:02:59.204 CC lib/json/json_parse.o 00:02:59.204 CC lib/vmd/led.o 00:02:59.204 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:59.204 CC lib/env_dpdk/pci.o 00:02:59.204 CC lib/json/json_util.o 00:02:59.204 LIB libspdk_conf.a 00:02:59.204 SO libspdk_conf.so.6.0 00:02:59.204 LIB libspdk_rdma_utils.a 00:02:59.204 SO libspdk_rdma_utils.so.1.0 00:02:59.204 SYMLINK libspdk_conf.so 00:02:59.204 CC lib/json/json_write.o 00:02:59.204 CC lib/env_dpdk/init.o 00:02:59.204 SYMLINK libspdk_rdma_utils.so 00:02:59.204 CC lib/env_dpdk/threads.o 00:02:59.204 LIB libspdk_rdma_provider.a 00:02:59.204 SO libspdk_rdma_provider.so.6.0 00:02:59.204 CC lib/env_dpdk/pci_ioat.o 00:02:59.204 SYMLINK libspdk_rdma_provider.so 00:02:59.204 CC lib/env_dpdk/pci_virtio.o 00:02:59.204 CC lib/env_dpdk/pci_vmd.o 00:02:59.204 LIB libspdk_idxd.a 00:02:59.204 CC lib/env_dpdk/pci_idxd.o 00:02:59.204 SO libspdk_idxd.so.12.1 00:02:59.204 LIB libspdk_json.a 00:02:59.204 CC lib/env_dpdk/pci_event.o 00:02:59.204 CC lib/env_dpdk/sigbus_handler.o 00:02:59.204 LIB libspdk_vmd.a 00:02:59.204 SO libspdk_json.so.6.0 00:02:59.204 CC lib/env_dpdk/pci_dpdk.o 00:02:59.204 SO libspdk_vmd.so.6.0 00:02:59.204 SYMLINK libspdk_idxd.so 00:02:59.204 SYMLINK libspdk_json.so 00:02:59.204 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:59.204 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:59.204 SYMLINK libspdk_vmd.so 00:02:59.204 CC lib/jsonrpc/jsonrpc_client.o 00:02:59.204 CC lib/jsonrpc/jsonrpc_server.o 00:02:59.204 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:59.204 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:59.204 LIB libspdk_jsonrpc.a 00:02:59.204 SO libspdk_jsonrpc.so.6.0 00:02:59.204 SYMLINK libspdk_jsonrpc.so 00:02:59.204 LIB libspdk_env_dpdk.a 00:02:59.204 SO libspdk_env_dpdk.so.15.0 00:02:59.204 CC lib/rpc/rpc.o 00:02:59.204 SYMLINK libspdk_env_dpdk.so 00:02:59.204 LIB libspdk_rpc.a 00:02:59.204 SO libspdk_rpc.so.6.0 00:02:59.204 SYMLINK libspdk_rpc.so 00:02:59.204 CC lib/keyring/keyring.o 00:02:59.204 CC lib/trace/trace.o 00:02:59.204 CC lib/trace/trace_flags.o 00:02:59.204 CC lib/trace/trace_rpc.o 
00:02:59.204 CC lib/keyring/keyring_rpc.o 00:02:59.204 CC lib/notify/notify.o 00:02:59.204 CC lib/notify/notify_rpc.o 00:02:59.204 LIB libspdk_notify.a 00:02:59.204 SO libspdk_notify.so.6.0 00:02:59.204 LIB libspdk_keyring.a 00:02:59.204 SO libspdk_keyring.so.2.0 00:02:59.204 LIB libspdk_trace.a 00:02:59.204 SYMLINK libspdk_notify.so 00:02:59.204 SYMLINK libspdk_keyring.so 00:02:59.204 SO libspdk_trace.so.11.0 00:02:59.204 SYMLINK libspdk_trace.so 00:02:59.463 CC lib/thread/iobuf.o 00:02:59.463 CC lib/thread/thread.o 00:02:59.463 CC lib/sock/sock.o 00:02:59.463 CC lib/sock/sock_rpc.o 00:03:00.030 LIB libspdk_sock.a 00:03:00.030 SO libspdk_sock.so.10.0 00:03:00.030 SYMLINK libspdk_sock.so 00:03:00.288 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:00.288 CC lib/nvme/nvme_ctrlr.o 00:03:00.288 CC lib/nvme/nvme_fabric.o 00:03:00.288 CC lib/nvme/nvme_ns_cmd.o 00:03:00.288 CC lib/nvme/nvme_pcie_common.o 00:03:00.288 CC lib/nvme/nvme_ns.o 00:03:00.288 CC lib/nvme/nvme_pcie.o 00:03:00.288 CC lib/nvme/nvme.o 00:03:00.288 CC lib/nvme/nvme_qpair.o 00:03:00.854 LIB libspdk_thread.a 00:03:00.854 SO libspdk_thread.so.10.1 00:03:00.854 SYMLINK libspdk_thread.so 00:03:00.854 CC lib/nvme/nvme_quirks.o 00:03:01.112 CC lib/nvme/nvme_transport.o 00:03:01.112 CC lib/nvme/nvme_discovery.o 00:03:01.112 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:01.112 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:01.370 CC lib/nvme/nvme_tcp.o 00:03:01.370 CC lib/nvme/nvme_opal.o 00:03:01.370 CC lib/nvme/nvme_io_msg.o 00:03:01.627 CC lib/nvme/nvme_poll_group.o 00:03:01.628 CC lib/accel/accel.o 00:03:01.885 CC lib/nvme/nvme_zns.o 00:03:01.885 CC lib/nvme/nvme_stubs.o 00:03:01.885 CC lib/nvme/nvme_auth.o 00:03:01.885 CC lib/blob/blobstore.o 00:03:01.885 CC lib/nvme/nvme_cuse.o 00:03:01.885 CC lib/init/json_config.o 00:03:02.142 CC lib/accel/accel_rpc.o 00:03:02.142 CC lib/init/subsystem.o 00:03:02.404 CC lib/accel/accel_sw.o 00:03:02.404 CC lib/nvme/nvme_rdma.o 00:03:02.404 CC lib/blob/request.o 00:03:02.404 CC lib/init/subsystem_rpc.o 00:03:02.662 CC lib/blob/zeroes.o 00:03:02.662 CC lib/init/rpc.o 00:03:02.662 CC lib/blob/blob_bs_dev.o 00:03:02.662 LIB libspdk_init.a 00:03:02.662 SO libspdk_init.so.6.0 00:03:02.920 LIB libspdk_accel.a 00:03:02.920 SYMLINK libspdk_init.so 00:03:02.920 SO libspdk_accel.so.16.0 00:03:02.920 CC lib/virtio/virtio.o 00:03:02.920 CC lib/virtio/virtio_vhost_user.o 00:03:02.920 CC lib/virtio/virtio_vfio_user.o 00:03:02.920 CC lib/virtio/virtio_pci.o 00:03:02.920 CC lib/fsdev/fsdev.o 00:03:02.920 SYMLINK libspdk_accel.so 00:03:02.920 CC lib/fsdev/fsdev_io.o 00:03:02.920 CC lib/fsdev/fsdev_rpc.o 00:03:02.920 CC lib/event/app.o 00:03:03.178 CC lib/event/reactor.o 00:03:03.178 CC lib/event/log_rpc.o 00:03:03.178 CC lib/event/app_rpc.o 00:03:03.178 LIB libspdk_virtio.a 00:03:03.178 CC lib/bdev/bdev.o 00:03:03.178 SO libspdk_virtio.so.7.0 00:03:03.178 CC lib/bdev/bdev_rpc.o 00:03:03.438 SYMLINK libspdk_virtio.so 00:03:03.438 CC lib/bdev/bdev_zone.o 00:03:03.438 CC lib/event/scheduler_static.o 00:03:03.438 CC lib/bdev/part.o 00:03:03.438 CC lib/bdev/scsi_nvme.o 00:03:03.697 LIB libspdk_event.a 00:03:03.697 SO libspdk_event.so.14.0 00:03:03.697 LIB libspdk_fsdev.a 00:03:03.697 SO libspdk_fsdev.so.1.0 00:03:03.697 SYMLINK libspdk_event.so 00:03:03.955 SYMLINK libspdk_fsdev.so 00:03:03.955 LIB libspdk_nvme.a 00:03:04.214 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:04.214 SO libspdk_nvme.so.14.0 00:03:04.473 SYMLINK libspdk_nvme.so 00:03:04.731 LIB libspdk_fuse_dispatcher.a 00:03:04.731 SO libspdk_fuse_dispatcher.so.1.0 00:03:04.989 
SYMLINK libspdk_fuse_dispatcher.so 00:03:05.248 LIB libspdk_blob.a 00:03:05.248 SO libspdk_blob.so.11.0 00:03:05.248 SYMLINK libspdk_blob.so 00:03:05.506 CC lib/blobfs/blobfs.o 00:03:05.506 CC lib/blobfs/tree.o 00:03:05.506 CC lib/lvol/lvol.o 00:03:06.072 LIB libspdk_bdev.a 00:03:06.331 SO libspdk_bdev.so.16.0 00:03:06.331 SYMLINK libspdk_bdev.so 00:03:06.589 LIB libspdk_blobfs.a 00:03:06.589 LIB libspdk_lvol.a 00:03:06.589 SO libspdk_blobfs.so.10.0 00:03:06.589 SO libspdk_lvol.so.10.0 00:03:06.589 CC lib/scsi/dev.o 00:03:06.589 CC lib/ftl/ftl_core.o 00:03:06.589 CC lib/ftl/ftl_init.o 00:03:06.589 CC lib/ftl/ftl_layout.o 00:03:06.589 CC lib/nvmf/ctrlr.o 00:03:06.589 CC lib/scsi/lun.o 00:03:06.589 CC lib/ublk/ublk.o 00:03:06.589 CC lib/nbd/nbd.o 00:03:06.589 SYMLINK libspdk_blobfs.so 00:03:06.589 CC lib/scsi/port.o 00:03:06.589 SYMLINK libspdk_lvol.so 00:03:06.589 CC lib/scsi/scsi.o 00:03:06.847 CC lib/ublk/ublk_rpc.o 00:03:06.847 CC lib/ftl/ftl_debug.o 00:03:06.847 CC lib/ftl/ftl_io.o 00:03:06.847 CC lib/scsi/scsi_bdev.o 00:03:06.847 CC lib/scsi/scsi_pr.o 00:03:06.847 CC lib/scsi/scsi_rpc.o 00:03:07.104 CC lib/scsi/task.o 00:03:07.104 CC lib/nbd/nbd_rpc.o 00:03:07.104 CC lib/ftl/ftl_sb.o 00:03:07.104 CC lib/ftl/ftl_l2p.o 00:03:07.104 CC lib/ftl/ftl_l2p_flat.o 00:03:07.104 CC lib/ftl/ftl_nv_cache.o 00:03:07.104 LIB libspdk_nbd.a 00:03:07.104 CC lib/ftl/ftl_band.o 00:03:07.104 SO libspdk_nbd.so.7.0 00:03:07.104 LIB libspdk_ublk.a 00:03:07.104 CC lib/ftl/ftl_band_ops.o 00:03:07.361 CC lib/nvmf/ctrlr_discovery.o 00:03:07.361 SO libspdk_ublk.so.3.0 00:03:07.361 CC lib/nvmf/ctrlr_bdev.o 00:03:07.361 SYMLINK libspdk_nbd.so 00:03:07.361 CC lib/nvmf/subsystem.o 00:03:07.361 SYMLINK libspdk_ublk.so 00:03:07.361 CC lib/ftl/ftl_writer.o 00:03:07.361 CC lib/ftl/ftl_rq.o 00:03:07.361 LIB libspdk_scsi.a 00:03:07.361 SO libspdk_scsi.so.9.0 00:03:07.618 SYMLINK libspdk_scsi.so 00:03:07.618 CC lib/ftl/ftl_reloc.o 00:03:07.618 CC lib/nvmf/nvmf.o 00:03:07.618 CC lib/nvmf/nvmf_rpc.o 00:03:07.618 CC lib/nvmf/transport.o 00:03:07.618 CC lib/ftl/ftl_l2p_cache.o 00:03:07.618 CC lib/nvmf/tcp.o 00:03:07.876 CC lib/nvmf/stubs.o 00:03:07.876 CC lib/nvmf/mdns_server.o 00:03:08.134 CC lib/ftl/ftl_p2l.o 00:03:08.391 CC lib/iscsi/conn.o 00:03:08.391 CC lib/nvmf/rdma.o 00:03:08.391 CC lib/ftl/ftl_p2l_log.o 00:03:08.391 CC lib/nvmf/auth.o 00:03:08.392 CC lib/ftl/mngt/ftl_mngt.o 00:03:08.392 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:08.392 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:08.392 CC lib/vhost/vhost.o 00:03:08.649 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:08.649 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:08.649 CC lib/vhost/vhost_rpc.o 00:03:08.649 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:08.649 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:08.907 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:08.907 CC lib/iscsi/init_grp.o 00:03:08.907 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:08.907 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:08.907 CC lib/vhost/vhost_scsi.o 00:03:09.166 CC lib/iscsi/iscsi.o 00:03:09.166 CC lib/iscsi/param.o 00:03:09.166 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:09.166 CC lib/iscsi/portal_grp.o 00:03:09.166 CC lib/iscsi/tgt_node.o 00:03:09.166 CC lib/iscsi/iscsi_subsystem.o 00:03:09.424 CC lib/vhost/vhost_blk.o 00:03:09.424 CC lib/vhost/rte_vhost_user.o 00:03:09.424 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:09.424 CC lib/iscsi/iscsi_rpc.o 00:03:09.683 CC lib/iscsi/task.o 00:03:09.683 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:09.683 CC lib/ftl/utils/ftl_conf.o 00:03:09.683 CC lib/ftl/utils/ftl_md.o 00:03:09.942 CC lib/ftl/utils/ftl_mempool.o 
00:03:09.942 CC lib/ftl/utils/ftl_bitmap.o 00:03:09.942 CC lib/ftl/utils/ftl_property.o 00:03:09.942 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:09.942 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:09.942 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:10.201 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:10.201 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:10.201 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:10.201 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:10.201 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:10.201 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:10.459 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:10.459 LIB libspdk_nvmf.a 00:03:10.459 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:10.459 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:10.459 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:10.459 LIB libspdk_vhost.a 00:03:10.459 CC lib/ftl/base/ftl_base_dev.o 00:03:10.459 LIB libspdk_iscsi.a 00:03:10.459 CC lib/ftl/base/ftl_base_bdev.o 00:03:10.459 SO libspdk_nvmf.so.19.0 00:03:10.459 SO libspdk_vhost.so.8.0 00:03:10.717 SO libspdk_iscsi.so.8.0 00:03:10.717 CC lib/ftl/ftl_trace.o 00:03:10.717 SYMLINK libspdk_vhost.so 00:03:10.717 SYMLINK libspdk_nvmf.so 00:03:10.717 SYMLINK libspdk_iscsi.so 00:03:10.975 LIB libspdk_ftl.a 00:03:11.234 SO libspdk_ftl.so.9.0 00:03:11.493 SYMLINK libspdk_ftl.so 00:03:11.752 CC module/env_dpdk/env_dpdk_rpc.o 00:03:12.011 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:12.011 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:12.011 CC module/sock/posix/posix.o 00:03:12.011 CC module/scheduler/gscheduler/gscheduler.o 00:03:12.011 CC module/blob/bdev/blob_bdev.o 00:03:12.011 CC module/sock/uring/uring.o 00:03:12.011 CC module/accel/error/accel_error.o 00:03:12.011 CC module/keyring/file/keyring.o 00:03:12.011 CC module/fsdev/aio/fsdev_aio.o 00:03:12.011 LIB libspdk_env_dpdk_rpc.a 00:03:12.011 SO libspdk_env_dpdk_rpc.so.6.0 00:03:12.011 SYMLINK libspdk_env_dpdk_rpc.so 00:03:12.011 CC module/keyring/file/keyring_rpc.o 00:03:12.011 LIB libspdk_scheduler_dpdk_governor.a 00:03:12.011 LIB libspdk_scheduler_gscheduler.a 00:03:12.011 CC module/accel/error/accel_error_rpc.o 00:03:12.011 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:12.011 SO libspdk_scheduler_gscheduler.so.4.0 00:03:12.011 LIB libspdk_scheduler_dynamic.a 00:03:12.269 SO libspdk_scheduler_dynamic.so.4.0 00:03:12.269 SYMLINK libspdk_scheduler_gscheduler.so 00:03:12.269 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:12.269 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:12.269 SYMLINK libspdk_scheduler_dynamic.so 00:03:12.269 CC module/fsdev/aio/linux_aio_mgr.o 00:03:12.269 LIB libspdk_blob_bdev.a 00:03:12.269 LIB libspdk_keyring_file.a 00:03:12.269 SO libspdk_blob_bdev.so.11.0 00:03:12.269 SO libspdk_keyring_file.so.2.0 00:03:12.269 LIB libspdk_accel_error.a 00:03:12.269 CC module/accel/ioat/accel_ioat.o 00:03:12.269 SO libspdk_accel_error.so.2.0 00:03:12.269 SYMLINK libspdk_blob_bdev.so 00:03:12.269 SYMLINK libspdk_keyring_file.so 00:03:12.269 CC module/accel/ioat/accel_ioat_rpc.o 00:03:12.269 SYMLINK libspdk_accel_error.so 00:03:12.527 CC module/keyring/linux/keyring.o 00:03:12.527 CC module/keyring/linux/keyring_rpc.o 00:03:12.527 LIB libspdk_accel_ioat.a 00:03:12.527 SO libspdk_accel_ioat.so.6.0 00:03:12.528 CC module/accel/dsa/accel_dsa.o 00:03:12.528 CC module/accel/iaa/accel_iaa.o 00:03:12.528 CC module/accel/dsa/accel_dsa_rpc.o 00:03:12.528 SYMLINK libspdk_accel_ioat.so 00:03:12.528 LIB libspdk_fsdev_aio.a 00:03:12.528 LIB libspdk_keyring_linux.a 00:03:12.528 LIB libspdk_sock_uring.a 00:03:12.528 SO libspdk_fsdev_aio.so.1.0 
00:03:12.528 SO libspdk_keyring_linux.so.1.0 00:03:12.528 SO libspdk_sock_uring.so.5.0 00:03:12.528 CC module/bdev/delay/vbdev_delay.o 00:03:12.786 LIB libspdk_sock_posix.a 00:03:12.786 SO libspdk_sock_posix.so.6.0 00:03:12.786 SYMLINK libspdk_keyring_linux.so 00:03:12.786 SYMLINK libspdk_sock_uring.so 00:03:12.786 SYMLINK libspdk_fsdev_aio.so 00:03:12.786 CC module/accel/iaa/accel_iaa_rpc.o 00:03:12.786 CC module/blobfs/bdev/blobfs_bdev.o 00:03:12.786 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:12.786 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:12.786 CC module/bdev/error/vbdev_error.o 00:03:12.786 SYMLINK libspdk_sock_posix.so 00:03:12.786 CC module/bdev/error/vbdev_error_rpc.o 00:03:12.786 LIB libspdk_accel_dsa.a 00:03:12.786 SO libspdk_accel_dsa.so.5.0 00:03:12.786 LIB libspdk_accel_iaa.a 00:03:13.045 SO libspdk_accel_iaa.so.3.0 00:03:13.045 LIB libspdk_blobfs_bdev.a 00:03:13.045 SYMLINK libspdk_accel_dsa.so 00:03:13.045 CC module/bdev/lvol/vbdev_lvol.o 00:03:13.045 SO libspdk_blobfs_bdev.so.6.0 00:03:13.045 CC module/bdev/gpt/gpt.o 00:03:13.045 SYMLINK libspdk_accel_iaa.so 00:03:13.045 CC module/bdev/gpt/vbdev_gpt.o 00:03:13.045 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:13.045 SYMLINK libspdk_blobfs_bdev.so 00:03:13.045 LIB libspdk_bdev_delay.a 00:03:13.045 LIB libspdk_bdev_error.a 00:03:13.045 SO libspdk_bdev_delay.so.6.0 00:03:13.045 SO libspdk_bdev_error.so.6.0 00:03:13.045 CC module/bdev/malloc/bdev_malloc.o 00:03:13.045 CC module/bdev/nvme/bdev_nvme.o 00:03:13.045 CC module/bdev/null/bdev_null.o 00:03:13.045 SYMLINK libspdk_bdev_delay.so 00:03:13.045 CC module/bdev/null/bdev_null_rpc.o 00:03:13.303 SYMLINK libspdk_bdev_error.so 00:03:13.303 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:13.303 CC module/bdev/passthru/vbdev_passthru.o 00:03:13.303 LIB libspdk_bdev_gpt.a 00:03:13.303 CC module/bdev/raid/bdev_raid.o 00:03:13.303 SO libspdk_bdev_gpt.so.6.0 00:03:13.303 CC module/bdev/raid/bdev_raid_rpc.o 00:03:13.303 LIB libspdk_bdev_null.a 00:03:13.562 SYMLINK libspdk_bdev_gpt.so 00:03:13.562 CC module/bdev/raid/bdev_raid_sb.o 00:03:13.562 SO libspdk_bdev_null.so.6.0 00:03:13.562 LIB libspdk_bdev_lvol.a 00:03:13.562 LIB libspdk_bdev_malloc.a 00:03:13.562 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:13.562 SO libspdk_bdev_lvol.so.6.0 00:03:13.562 CC module/bdev/split/vbdev_split.o 00:03:13.562 SO libspdk_bdev_malloc.so.6.0 00:03:13.562 SYMLINK libspdk_bdev_null.so 00:03:13.562 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:13.562 SYMLINK libspdk_bdev_lvol.so 00:03:13.562 SYMLINK libspdk_bdev_malloc.so 00:03:13.562 CC module/bdev/raid/raid0.o 00:03:13.821 LIB libspdk_bdev_passthru.a 00:03:13.821 CC module/bdev/raid/raid1.o 00:03:13.821 CC module/bdev/uring/bdev_uring.o 00:03:13.821 SO libspdk_bdev_passthru.so.6.0 00:03:13.821 CC module/bdev/ftl/bdev_ftl.o 00:03:13.821 CC module/bdev/aio/bdev_aio.o 00:03:13.821 CC module/bdev/split/vbdev_split_rpc.o 00:03:13.821 SYMLINK libspdk_bdev_passthru.so 00:03:13.821 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:13.821 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:13.821 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:14.082 LIB libspdk_bdev_split.a 00:03:14.082 SO libspdk_bdev_split.so.6.0 00:03:14.082 CC module/bdev/nvme/nvme_rpc.o 00:03:14.082 LIB libspdk_bdev_zone_block.a 00:03:14.082 LIB libspdk_bdev_ftl.a 00:03:14.082 SYMLINK libspdk_bdev_split.so 00:03:14.082 CC module/bdev/raid/concat.o 00:03:14.082 SO libspdk_bdev_zone_block.so.6.0 00:03:14.082 CC module/bdev/uring/bdev_uring_rpc.o 00:03:14.082 SO libspdk_bdev_ftl.so.6.0 
00:03:14.082 CC module/bdev/aio/bdev_aio_rpc.o 00:03:14.082 SYMLINK libspdk_bdev_zone_block.so 00:03:14.082 SYMLINK libspdk_bdev_ftl.so 00:03:14.082 CC module/bdev/nvme/bdev_mdns_client.o 00:03:14.340 CC module/bdev/iscsi/bdev_iscsi.o 00:03:14.340 CC module/bdev/nvme/vbdev_opal.o 00:03:14.340 LIB libspdk_bdev_uring.a 00:03:14.340 SO libspdk_bdev_uring.so.6.0 00:03:14.340 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:14.340 LIB libspdk_bdev_aio.a 00:03:14.340 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:14.340 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:14.340 SO libspdk_bdev_aio.so.6.0 00:03:14.340 LIB libspdk_bdev_raid.a 00:03:14.340 SYMLINK libspdk_bdev_uring.so 00:03:14.340 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:14.340 SYMLINK libspdk_bdev_aio.so 00:03:14.340 SO libspdk_bdev_raid.so.6.0 00:03:14.340 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:14.597 SYMLINK libspdk_bdev_raid.so 00:03:14.597 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:14.597 LIB libspdk_bdev_iscsi.a 00:03:14.597 SO libspdk_bdev_iscsi.so.6.0 00:03:14.597 SYMLINK libspdk_bdev_iscsi.so 00:03:14.855 LIB libspdk_bdev_virtio.a 00:03:14.855 SO libspdk_bdev_virtio.so.6.0 00:03:15.113 SYMLINK libspdk_bdev_virtio.so 00:03:15.371 LIB libspdk_bdev_nvme.a 00:03:15.630 SO libspdk_bdev_nvme.so.7.0 00:03:15.630 SYMLINK libspdk_bdev_nvme.so 00:03:16.198 CC module/event/subsystems/sock/sock.o 00:03:16.198 CC module/event/subsystems/fsdev/fsdev.o 00:03:16.198 CC module/event/subsystems/scheduler/scheduler.o 00:03:16.198 CC module/event/subsystems/keyring/keyring.o 00:03:16.198 CC module/event/subsystems/vmd/vmd.o 00:03:16.198 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:16.198 CC module/event/subsystems/iobuf/iobuf.o 00:03:16.198 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:16.198 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:16.198 LIB libspdk_event_fsdev.a 00:03:16.198 LIB libspdk_event_keyring.a 00:03:16.198 LIB libspdk_event_scheduler.a 00:03:16.198 LIB libspdk_event_sock.a 00:03:16.198 LIB libspdk_event_vhost_blk.a 00:03:16.198 LIB libspdk_event_iobuf.a 00:03:16.198 SO libspdk_event_keyring.so.1.0 00:03:16.198 SO libspdk_event_fsdev.so.1.0 00:03:16.198 LIB libspdk_event_vmd.a 00:03:16.198 SO libspdk_event_scheduler.so.4.0 00:03:16.198 SO libspdk_event_sock.so.5.0 00:03:16.198 SO libspdk_event_vhost_blk.so.3.0 00:03:16.457 SO libspdk_event_iobuf.so.3.0 00:03:16.457 SO libspdk_event_vmd.so.6.0 00:03:16.457 SYMLINK libspdk_event_scheduler.so 00:03:16.457 SYMLINK libspdk_event_keyring.so 00:03:16.457 SYMLINK libspdk_event_fsdev.so 00:03:16.457 SYMLINK libspdk_event_sock.so 00:03:16.457 SYMLINK libspdk_event_vhost_blk.so 00:03:16.457 SYMLINK libspdk_event_iobuf.so 00:03:16.457 SYMLINK libspdk_event_vmd.so 00:03:16.719 CC module/event/subsystems/accel/accel.o 00:03:16.719 LIB libspdk_event_accel.a 00:03:16.978 SO libspdk_event_accel.so.6.0 00:03:16.978 SYMLINK libspdk_event_accel.so 00:03:17.236 CC module/event/subsystems/bdev/bdev.o 00:03:17.494 LIB libspdk_event_bdev.a 00:03:17.494 SO libspdk_event_bdev.so.6.0 00:03:17.494 SYMLINK libspdk_event_bdev.so 00:03:17.752 CC module/event/subsystems/ublk/ublk.o 00:03:17.752 CC module/event/subsystems/scsi/scsi.o 00:03:17.752 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:17.752 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:17.752 CC module/event/subsystems/nbd/nbd.o 00:03:18.010 LIB libspdk_event_ublk.a 00:03:18.010 LIB libspdk_event_nbd.a 00:03:18.010 LIB libspdk_event_scsi.a 00:03:18.010 SO libspdk_event_ublk.so.3.0 00:03:18.010 SO libspdk_event_nbd.so.6.0 
00:03:18.010 SO libspdk_event_scsi.so.6.0 00:03:18.010 SYMLINK libspdk_event_ublk.so 00:03:18.010 SYMLINK libspdk_event_scsi.so 00:03:18.010 SYMLINK libspdk_event_nbd.so 00:03:18.010 LIB libspdk_event_nvmf.a 00:03:18.010 SO libspdk_event_nvmf.so.6.0 00:03:18.010 SYMLINK libspdk_event_nvmf.so 00:03:18.268 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:18.268 CC module/event/subsystems/iscsi/iscsi.o 00:03:18.527 LIB libspdk_event_vhost_scsi.a 00:03:18.527 LIB libspdk_event_iscsi.a 00:03:18.527 SO libspdk_event_vhost_scsi.so.3.0 00:03:18.527 SO libspdk_event_iscsi.so.6.0 00:03:18.527 SYMLINK libspdk_event_vhost_scsi.so 00:03:18.527 SYMLINK libspdk_event_iscsi.so 00:03:18.784 SO libspdk.so.6.0 00:03:18.784 SYMLINK libspdk.so 00:03:19.041 CC app/spdk_lspci/spdk_lspci.o 00:03:19.041 CC app/trace_record/trace_record.o 00:03:19.041 CXX app/trace/trace.o 00:03:19.041 CC app/spdk_nvme_identify/identify.o 00:03:19.041 CC app/spdk_nvme_perf/perf.o 00:03:19.041 CC app/iscsi_tgt/iscsi_tgt.o 00:03:19.041 CC app/nvmf_tgt/nvmf_main.o 00:03:19.041 CC app/spdk_tgt/spdk_tgt.o 00:03:19.041 CC examples/util/zipf/zipf.o 00:03:19.041 CC test/thread/poller_perf/poller_perf.o 00:03:19.041 LINK spdk_lspci 00:03:19.298 LINK iscsi_tgt 00:03:19.298 LINK nvmf_tgt 00:03:19.298 LINK poller_perf 00:03:19.298 LINK zipf 00:03:19.298 LINK spdk_tgt 00:03:19.298 LINK spdk_trace_record 00:03:19.298 LINK spdk_trace 00:03:19.556 CC examples/ioat/perf/perf.o 00:03:19.556 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:19.556 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.556 CC app/spdk_top/spdk_top.o 00:03:19.556 CC test/dma/test_dma/test_dma.o 00:03:19.556 CC app/spdk_dd/spdk_dd.o 00:03:19.814 CC examples/thread/thread/thread_ex.o 00:03:19.814 LINK ioat_perf 00:03:19.814 CC app/fio/nvme/fio_plugin.o 00:03:19.814 LINK interrupt_tgt 00:03:19.814 LINK spdk_nvme_discover 00:03:19.814 LINK spdk_nvme_identify 00:03:19.814 LINK spdk_nvme_perf 00:03:20.072 CC examples/ioat/verify/verify.o 00:03:20.072 LINK thread 00:03:20.072 TEST_HEADER include/spdk/accel.h 00:03:20.072 TEST_HEADER include/spdk/accel_module.h 00:03:20.072 LINK spdk_dd 00:03:20.072 TEST_HEADER include/spdk/assert.h 00:03:20.072 TEST_HEADER include/spdk/barrier.h 00:03:20.072 TEST_HEADER include/spdk/base64.h 00:03:20.072 TEST_HEADER include/spdk/bdev.h 00:03:20.072 TEST_HEADER include/spdk/bdev_module.h 00:03:20.072 TEST_HEADER include/spdk/bdev_zone.h 00:03:20.072 TEST_HEADER include/spdk/bit_array.h 00:03:20.072 TEST_HEADER include/spdk/bit_pool.h 00:03:20.072 TEST_HEADER include/spdk/blob_bdev.h 00:03:20.072 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:20.072 CC examples/sock/hello_world/hello_sock.o 00:03:20.072 TEST_HEADER include/spdk/blobfs.h 00:03:20.072 TEST_HEADER include/spdk/blob.h 00:03:20.072 TEST_HEADER include/spdk/conf.h 00:03:20.072 TEST_HEADER include/spdk/config.h 00:03:20.072 TEST_HEADER include/spdk/cpuset.h 00:03:20.072 TEST_HEADER include/spdk/crc16.h 00:03:20.072 TEST_HEADER include/spdk/crc32.h 00:03:20.072 TEST_HEADER include/spdk/crc64.h 00:03:20.072 TEST_HEADER include/spdk/dif.h 00:03:20.072 TEST_HEADER include/spdk/dma.h 00:03:20.072 TEST_HEADER include/spdk/endian.h 00:03:20.072 LINK test_dma 00:03:20.072 TEST_HEADER include/spdk/env_dpdk.h 00:03:20.072 TEST_HEADER include/spdk/env.h 00:03:20.072 TEST_HEADER include/spdk/event.h 00:03:20.072 TEST_HEADER include/spdk/fd_group.h 00:03:20.072 TEST_HEADER include/spdk/fd.h 00:03:20.072 TEST_HEADER include/spdk/file.h 00:03:20.072 TEST_HEADER include/spdk/fsdev.h 00:03:20.072 
TEST_HEADER include/spdk/fsdev_module.h 00:03:20.072 TEST_HEADER include/spdk/ftl.h 00:03:20.072 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:20.072 TEST_HEADER include/spdk/gpt_spec.h 00:03:20.072 CC test/app/bdev_svc/bdev_svc.o 00:03:20.072 LINK verify 00:03:20.072 TEST_HEADER include/spdk/hexlify.h 00:03:20.072 TEST_HEADER include/spdk/histogram_data.h 00:03:20.331 TEST_HEADER include/spdk/idxd.h 00:03:20.331 TEST_HEADER include/spdk/idxd_spec.h 00:03:20.331 TEST_HEADER include/spdk/init.h 00:03:20.331 TEST_HEADER include/spdk/ioat.h 00:03:20.331 TEST_HEADER include/spdk/ioat_spec.h 00:03:20.331 TEST_HEADER include/spdk/iscsi_spec.h 00:03:20.331 TEST_HEADER include/spdk/json.h 00:03:20.331 TEST_HEADER include/spdk/jsonrpc.h 00:03:20.331 TEST_HEADER include/spdk/keyring.h 00:03:20.331 TEST_HEADER include/spdk/keyring_module.h 00:03:20.331 TEST_HEADER include/spdk/likely.h 00:03:20.331 TEST_HEADER include/spdk/log.h 00:03:20.331 TEST_HEADER include/spdk/lvol.h 00:03:20.331 TEST_HEADER include/spdk/md5.h 00:03:20.331 TEST_HEADER include/spdk/memory.h 00:03:20.331 TEST_HEADER include/spdk/mmio.h 00:03:20.331 TEST_HEADER include/spdk/nbd.h 00:03:20.331 TEST_HEADER include/spdk/net.h 00:03:20.331 TEST_HEADER include/spdk/notify.h 00:03:20.331 TEST_HEADER include/spdk/nvme.h 00:03:20.331 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:20.331 TEST_HEADER include/spdk/nvme_intel.h 00:03:20.331 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:20.331 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:20.331 TEST_HEADER include/spdk/nvme_spec.h 00:03:20.331 TEST_HEADER include/spdk/nvme_zns.h 00:03:20.331 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:20.331 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:20.331 TEST_HEADER include/spdk/nvmf.h 00:03:20.331 TEST_HEADER include/spdk/nvmf_spec.h 00:03:20.331 TEST_HEADER include/spdk/nvmf_transport.h 00:03:20.331 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:20.331 TEST_HEADER include/spdk/opal.h 00:03:20.331 TEST_HEADER include/spdk/opal_spec.h 00:03:20.331 TEST_HEADER include/spdk/pci_ids.h 00:03:20.331 TEST_HEADER include/spdk/pipe.h 00:03:20.331 TEST_HEADER include/spdk/queue.h 00:03:20.331 TEST_HEADER include/spdk/reduce.h 00:03:20.331 TEST_HEADER include/spdk/rpc.h 00:03:20.331 TEST_HEADER include/spdk/scheduler.h 00:03:20.331 TEST_HEADER include/spdk/scsi.h 00:03:20.331 TEST_HEADER include/spdk/scsi_spec.h 00:03:20.331 TEST_HEADER include/spdk/sock.h 00:03:20.331 TEST_HEADER include/spdk/stdinc.h 00:03:20.331 TEST_HEADER include/spdk/string.h 00:03:20.331 TEST_HEADER include/spdk/thread.h 00:03:20.331 TEST_HEADER include/spdk/trace.h 00:03:20.331 TEST_HEADER include/spdk/trace_parser.h 00:03:20.331 TEST_HEADER include/spdk/tree.h 00:03:20.331 TEST_HEADER include/spdk/ublk.h 00:03:20.331 LINK spdk_nvme 00:03:20.331 TEST_HEADER include/spdk/util.h 00:03:20.331 TEST_HEADER include/spdk/uuid.h 00:03:20.331 TEST_HEADER include/spdk/version.h 00:03:20.331 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:20.331 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:20.331 TEST_HEADER include/spdk/vhost.h 00:03:20.331 TEST_HEADER include/spdk/vmd.h 00:03:20.331 TEST_HEADER include/spdk/xor.h 00:03:20.331 TEST_HEADER include/spdk/zipf.h 00:03:20.331 CXX test/cpp_headers/accel.o 00:03:20.331 LINK hello_sock 00:03:20.331 LINK bdev_svc 00:03:20.331 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:20.331 LINK spdk_top 00:03:20.590 CC app/fio/bdev/fio_plugin.o 00:03:20.590 CXX test/cpp_headers/accel_module.o 00:03:20.590 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.590 CC 
app/vhost/vhost.o 00:03:20.590 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:20.590 CC examples/vmd/led/led.o 00:03:20.590 LINK lsvmd 00:03:20.590 CC test/app/histogram_perf/histogram_perf.o 00:03:20.849 CC test/app/jsoncat/jsoncat.o 00:03:20.849 LINK nvme_fuzz 00:03:20.849 CXX test/cpp_headers/assert.o 00:03:20.849 LINK vhost 00:03:20.849 LINK led 00:03:20.849 LINK jsoncat 00:03:20.849 LINK histogram_perf 00:03:20.849 CXX test/cpp_headers/barrier.o 00:03:20.849 CC test/app/stub/stub.o 00:03:21.106 LINK vhost_fuzz 00:03:21.106 CXX test/cpp_headers/base64.o 00:03:21.106 LINK spdk_bdev 00:03:21.106 CC test/env/vtophys/vtophys.o 00:03:21.106 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:21.106 CC test/env/mem_callbacks/mem_callbacks.o 00:03:21.106 LINK stub 00:03:21.106 CC examples/idxd/perf/perf.o 00:03:21.106 CXX test/cpp_headers/bdev.o 00:03:21.364 LINK vtophys 00:03:21.364 LINK env_dpdk_post_init 00:03:21.364 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:21.364 CC test/event/event_perf/event_perf.o 00:03:21.364 CC test/nvme/aer/aer.o 00:03:21.364 CC test/nvme/reset/reset.o 00:03:21.364 CXX test/cpp_headers/bdev_module.o 00:03:21.623 LINK event_perf 00:03:21.623 LINK idxd_perf 00:03:21.623 CC test/nvme/sgl/sgl.o 00:03:21.623 CC test/nvme/e2edp/nvme_dp.o 00:03:21.623 LINK hello_fsdev 00:03:21.623 LINK aer 00:03:21.623 CXX test/cpp_headers/bdev_zone.o 00:03:21.623 LINK reset 00:03:21.881 CC test/event/reactor/reactor.o 00:03:21.881 LINK mem_callbacks 00:03:21.881 CC test/event/reactor_perf/reactor_perf.o 00:03:21.881 LINK sgl 00:03:21.881 LINK nvme_dp 00:03:21.881 CXX test/cpp_headers/bit_array.o 00:03:21.881 LINK reactor 00:03:21.881 CC test/event/app_repeat/app_repeat.o 00:03:21.881 LINK reactor_perf 00:03:21.881 LINK iscsi_fuzz 00:03:21.881 CC test/env/memory/memory_ut.o 00:03:22.141 CC test/event/scheduler/scheduler.o 00:03:22.141 CC examples/accel/perf/accel_perf.o 00:03:22.141 CXX test/cpp_headers/bit_pool.o 00:03:22.141 CC test/nvme/overhead/overhead.o 00:03:22.141 LINK app_repeat 00:03:22.141 CC test/rpc_client/rpc_client_test.o 00:03:22.141 CC test/env/pci/pci_ut.o 00:03:22.141 CXX test/cpp_headers/blob_bdev.o 00:03:22.400 LINK scheduler 00:03:22.400 CC test/accel/dif/dif.o 00:03:22.400 LINK rpc_client_test 00:03:22.400 LINK overhead 00:03:22.400 CC test/blobfs/mkfs/mkfs.o 00:03:22.400 CXX test/cpp_headers/blobfs_bdev.o 00:03:22.400 CXX test/cpp_headers/blobfs.o 00:03:22.671 LINK accel_perf 00:03:22.671 CXX test/cpp_headers/blob.o 00:03:22.671 CC test/lvol/esnap/esnap.o 00:03:22.671 CC test/nvme/err_injection/err_injection.o 00:03:22.671 LINK mkfs 00:03:22.671 LINK pci_ut 00:03:22.671 CXX test/cpp_headers/conf.o 00:03:22.941 LINK err_injection 00:03:22.941 CC test/nvme/startup/startup.o 00:03:22.941 CC examples/nvme/hello_world/hello_world.o 00:03:22.941 CC examples/blob/hello_world/hello_blob.o 00:03:22.941 CXX test/cpp_headers/config.o 00:03:22.941 CC examples/nvme/reconnect/reconnect.o 00:03:22.941 CXX test/cpp_headers/cpuset.o 00:03:22.941 CC test/nvme/reserve/reserve.o 00:03:22.941 LINK dif 00:03:22.941 LINK startup 00:03:22.941 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:23.199 CXX test/cpp_headers/crc16.o 00:03:23.199 LINK hello_blob 00:03:23.199 LINK hello_world 00:03:23.199 LINK reserve 00:03:23.199 CC examples/nvme/arbitration/arbitration.o 00:03:23.199 CC examples/nvme/hotplug/hotplug.o 00:03:23.199 LINK memory_ut 00:03:23.199 LINK reconnect 00:03:23.199 CXX test/cpp_headers/crc32.o 00:03:23.458 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:23.458 CC 
test/nvme/simple_copy/simple_copy.o 00:03:23.458 CXX test/cpp_headers/crc64.o 00:03:23.458 CC examples/blob/cli/blobcli.o 00:03:23.458 CXX test/cpp_headers/dif.o 00:03:23.458 LINK hotplug 00:03:23.458 CC examples/nvme/abort/abort.o 00:03:23.458 LINK nvme_manage 00:03:23.717 LINK cmb_copy 00:03:23.717 LINK arbitration 00:03:23.717 CXX test/cpp_headers/dma.o 00:03:23.717 LINK simple_copy 00:03:23.717 CC test/nvme/connect_stress/connect_stress.o 00:03:23.975 CC test/nvme/compliance/nvme_compliance.o 00:03:23.975 CC test/nvme/boot_partition/boot_partition.o 00:03:23.975 CXX test/cpp_headers/endian.o 00:03:23.975 CC test/nvme/fused_ordering/fused_ordering.o 00:03:23.975 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:23.975 LINK abort 00:03:23.975 LINK blobcli 00:03:23.975 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:23.975 LINK boot_partition 00:03:23.975 LINK connect_stress 00:03:23.975 CXX test/cpp_headers/env_dpdk.o 00:03:24.234 LINK pmr_persistence 00:03:24.234 LINK fused_ordering 00:03:24.234 LINK nvme_compliance 00:03:24.234 CXX test/cpp_headers/env.o 00:03:24.234 LINK doorbell_aers 00:03:24.234 CC test/nvme/fdp/fdp.o 00:03:24.234 CXX test/cpp_headers/event.o 00:03:24.234 CXX test/cpp_headers/fd_group.o 00:03:24.234 CC test/nvme/cuse/cuse.o 00:03:24.492 CC test/bdev/bdevio/bdevio.o 00:03:24.492 CXX test/cpp_headers/fd.o 00:03:24.492 CXX test/cpp_headers/file.o 00:03:24.492 CXX test/cpp_headers/fsdev.o 00:03:24.492 CXX test/cpp_headers/fsdev_module.o 00:03:24.492 CXX test/cpp_headers/ftl.o 00:03:24.492 CC examples/bdev/hello_world/hello_bdev.o 00:03:24.492 CXX test/cpp_headers/fuse_dispatcher.o 00:03:24.492 CXX test/cpp_headers/gpt_spec.o 00:03:24.750 CXX test/cpp_headers/hexlify.o 00:03:24.750 LINK fdp 00:03:24.750 CC examples/bdev/bdevperf/bdevperf.o 00:03:24.750 CXX test/cpp_headers/histogram_data.o 00:03:24.750 CXX test/cpp_headers/idxd.o 00:03:24.750 CXX test/cpp_headers/idxd_spec.o 00:03:24.750 LINK hello_bdev 00:03:24.750 LINK bdevio 00:03:24.750 CXX test/cpp_headers/init.o 00:03:24.750 CXX test/cpp_headers/ioat.o 00:03:25.009 CXX test/cpp_headers/ioat_spec.o 00:03:25.009 CXX test/cpp_headers/iscsi_spec.o 00:03:25.009 CXX test/cpp_headers/json.o 00:03:25.009 CXX test/cpp_headers/jsonrpc.o 00:03:25.009 CXX test/cpp_headers/keyring.o 00:03:25.009 CXX test/cpp_headers/keyring_module.o 00:03:25.009 CXX test/cpp_headers/likely.o 00:03:25.009 CXX test/cpp_headers/log.o 00:03:25.009 CXX test/cpp_headers/lvol.o 00:03:25.009 CXX test/cpp_headers/md5.o 00:03:25.009 CXX test/cpp_headers/memory.o 00:03:25.268 CXX test/cpp_headers/mmio.o 00:03:25.268 CXX test/cpp_headers/nbd.o 00:03:25.268 CXX test/cpp_headers/net.o 00:03:25.268 CXX test/cpp_headers/notify.o 00:03:25.268 CXX test/cpp_headers/nvme.o 00:03:25.268 CXX test/cpp_headers/nvme_intel.o 00:03:25.268 CXX test/cpp_headers/nvme_ocssd.o 00:03:25.268 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:25.268 CXX test/cpp_headers/nvme_spec.o 00:03:25.268 CXX test/cpp_headers/nvme_zns.o 00:03:25.268 CXX test/cpp_headers/nvmf_cmd.o 00:03:25.525 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:25.525 CXX test/cpp_headers/nvmf.o 00:03:25.525 CXX test/cpp_headers/nvmf_spec.o 00:03:25.525 CXX test/cpp_headers/nvmf_transport.o 00:03:25.525 CXX test/cpp_headers/opal.o 00:03:25.525 LINK bdevperf 00:03:25.525 CXX test/cpp_headers/opal_spec.o 00:03:25.525 CXX test/cpp_headers/pci_ids.o 00:03:25.525 CXX test/cpp_headers/pipe.o 00:03:25.526 CXX test/cpp_headers/queue.o 00:03:25.784 CXX test/cpp_headers/reduce.o 00:03:25.784 CXX test/cpp_headers/rpc.o 
00:03:25.784 LINK cuse 00:03:25.784 CXX test/cpp_headers/scheduler.o 00:03:25.784 CXX test/cpp_headers/scsi.o 00:03:25.784 CXX test/cpp_headers/scsi_spec.o 00:03:25.784 CXX test/cpp_headers/sock.o 00:03:25.784 CXX test/cpp_headers/stdinc.o 00:03:25.784 CXX test/cpp_headers/string.o 00:03:25.784 CXX test/cpp_headers/thread.o 00:03:26.042 CXX test/cpp_headers/trace.o 00:03:26.042 CXX test/cpp_headers/trace_parser.o 00:03:26.042 CXX test/cpp_headers/tree.o 00:03:26.042 CXX test/cpp_headers/ublk.o 00:03:26.042 CXX test/cpp_headers/util.o 00:03:26.042 CC examples/nvmf/nvmf/nvmf.o 00:03:26.042 CXX test/cpp_headers/uuid.o 00:03:26.042 CXX test/cpp_headers/version.o 00:03:26.042 CXX test/cpp_headers/vfio_user_pci.o 00:03:26.042 CXX test/cpp_headers/vfio_user_spec.o 00:03:26.042 CXX test/cpp_headers/vhost.o 00:03:26.042 CXX test/cpp_headers/vmd.o 00:03:26.042 CXX test/cpp_headers/xor.o 00:03:26.042 CXX test/cpp_headers/zipf.o 00:03:26.300 LINK nvmf 00:03:27.676 LINK esnap 00:03:28.243 00:03:28.243 real 1m27.395s 00:03:28.243 user 8m17.060s 00:03:28.243 sys 1m32.312s 00:03:28.243 13:31:19 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:28.243 13:31:19 make -- common/autotest_common.sh@10 -- $ set +x 00:03:28.243 ************************************ 00:03:28.243 END TEST make 00:03:28.243 ************************************ 00:03:28.243 13:31:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:28.243 13:31:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:28.243 13:31:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:28.243 13:31:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.243 13:31:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:28.243 13:31:19 -- pm/common@44 -- $ pid=5293 00:03:28.243 13:31:19 -- pm/common@50 -- $ kill -TERM 5293 00:03:28.243 13:31:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.243 13:31:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:28.243 13:31:19 -- pm/common@44 -- $ pid=5295 00:03:28.243 13:31:19 -- pm/common@50 -- $ kill -TERM 5295 00:03:28.243 13:31:19 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:28.243 13:31:19 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:28.243 13:31:19 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:28.243 13:31:20 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:28.243 13:31:20 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:28.243 13:31:20 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:28.243 13:31:20 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:28.243 13:31:20 -- scripts/common.sh@336 -- # IFS=.-: 00:03:28.243 13:31:20 -- scripts/common.sh@336 -- # read -ra ver1 00:03:28.243 13:31:20 -- scripts/common.sh@337 -- # IFS=.-: 00:03:28.243 13:31:20 -- scripts/common.sh@337 -- # read -ra ver2 00:03:28.243 13:31:20 -- scripts/common.sh@338 -- # local 'op=<' 00:03:28.243 13:31:20 -- scripts/common.sh@340 -- # ver1_l=2 00:03:28.243 13:31:20 -- scripts/common.sh@341 -- # ver2_l=1 00:03:28.243 13:31:20 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:28.243 13:31:20 -- scripts/common.sh@344 -- # case "$op" in 00:03:28.243 13:31:20 -- scripts/common.sh@345 -- # : 1 00:03:28.243 13:31:20 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:28.243 13:31:20 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:28.243 13:31:20 -- scripts/common.sh@365 -- # decimal 1 00:03:28.243 13:31:20 -- scripts/common.sh@353 -- # local d=1 00:03:28.243 13:31:20 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:28.243 13:31:20 -- scripts/common.sh@355 -- # echo 1 00:03:28.243 13:31:20 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:28.243 13:31:20 -- scripts/common.sh@366 -- # decimal 2 00:03:28.243 13:31:20 -- scripts/common.sh@353 -- # local d=2 00:03:28.243 13:31:20 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:28.243 13:31:20 -- scripts/common.sh@355 -- # echo 2 00:03:28.243 13:31:20 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:28.243 13:31:20 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:28.243 13:31:20 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:28.243 13:31:20 -- scripts/common.sh@368 -- # return 0 00:03:28.243 13:31:20 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:28.243 13:31:20 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:28.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.243 --rc genhtml_branch_coverage=1 00:03:28.243 --rc genhtml_function_coverage=1 00:03:28.243 --rc genhtml_legend=1 00:03:28.243 --rc geninfo_all_blocks=1 00:03:28.243 --rc geninfo_unexecuted_blocks=1 00:03:28.243 00:03:28.243 ' 00:03:28.243 13:31:20 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:28.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.243 --rc genhtml_branch_coverage=1 00:03:28.243 --rc genhtml_function_coverage=1 00:03:28.243 --rc genhtml_legend=1 00:03:28.243 --rc geninfo_all_blocks=1 00:03:28.243 --rc geninfo_unexecuted_blocks=1 00:03:28.243 00:03:28.243 ' 00:03:28.243 13:31:20 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:28.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.243 --rc genhtml_branch_coverage=1 00:03:28.243 --rc genhtml_function_coverage=1 00:03:28.243 --rc genhtml_legend=1 00:03:28.243 --rc geninfo_all_blocks=1 00:03:28.243 --rc geninfo_unexecuted_blocks=1 00:03:28.243 00:03:28.243 ' 00:03:28.243 13:31:20 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:28.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.243 --rc genhtml_branch_coverage=1 00:03:28.243 --rc genhtml_function_coverage=1 00:03:28.243 --rc genhtml_legend=1 00:03:28.243 --rc geninfo_all_blocks=1 00:03:28.243 --rc geninfo_unexecuted_blocks=1 00:03:28.243 00:03:28.243 ' 00:03:28.243 13:31:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:28.243 13:31:20 -- nvmf/common.sh@7 -- # uname -s 00:03:28.243 13:31:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:28.243 13:31:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:28.243 13:31:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:28.243 13:31:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:28.243 13:31:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:28.243 13:31:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:28.243 13:31:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:28.243 13:31:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:28.243 13:31:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:28.243 13:31:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:28.243 13:31:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:03:28.244 
13:31:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:03:28.244 13:31:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:28.244 13:31:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:28.244 13:31:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:28.244 13:31:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:28.244 13:31:20 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:28.244 13:31:20 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:28.244 13:31:20 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:28.244 13:31:20 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:28.244 13:31:20 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:28.244 13:31:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.244 13:31:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.244 13:31:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.244 13:31:20 -- paths/export.sh@5 -- # export PATH 00:03:28.244 13:31:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.244 13:31:20 -- nvmf/common.sh@51 -- # : 0 00:03:28.244 13:31:20 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:28.244 13:31:20 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:28.244 13:31:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:28.244 13:31:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:28.244 13:31:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:28.244 13:31:20 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:28.244 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:28.244 13:31:20 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:28.244 13:31:20 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:28.244 13:31:20 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:28.244 13:31:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:28.244 13:31:20 -- spdk/autotest.sh@32 -- # uname -s 00:03:28.244 13:31:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:28.244 13:31:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:28.244 13:31:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.244 13:31:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:28.244 13:31:20 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.244 13:31:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:28.502 13:31:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:28.502 13:31:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:28.502 13:31:20 -- spdk/autotest.sh@48 -- # udevadm_pid=54386 00:03:28.502 13:31:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:28.502 13:31:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:28.502 13:31:20 -- pm/common@17 -- # local monitor 00:03:28.502 13:31:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.502 13:31:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.502 13:31:20 -- pm/common@25 -- # sleep 1 00:03:28.502 13:31:20 -- pm/common@21 -- # date +%s 00:03:28.502 13:31:20 -- pm/common@21 -- # date +%s 00:03:28.502 13:31:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727789480 00:03:28.502 13:31:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727789480 00:03:28.502 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727789480_collect-cpu-load.pm.log 00:03:28.502 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727789480_collect-vmstat.pm.log 00:03:29.439 13:31:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:29.439 13:31:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:29.439 13:31:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:29.439 13:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:29.439 13:31:21 -- spdk/autotest.sh@59 -- # create_test_list 00:03:29.439 13:31:21 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:29.439 13:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:29.439 13:31:21 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:29.439 13:31:21 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:29.439 13:31:21 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:29.439 13:31:21 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:29.439 13:31:21 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:29.439 13:31:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:29.439 13:31:21 -- common/autotest_common.sh@1455 -- # uname 00:03:29.439 13:31:21 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:29.439 13:31:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:29.439 13:31:21 -- common/autotest_common.sh@1475 -- # uname 00:03:29.439 13:31:21 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:29.439 13:31:21 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:29.439 13:31:21 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:29.439 lcov: LCOV version 1.15 00:03:29.439 13:31:21 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:44.317 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:44.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:59.192 13:31:50 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:59.192 13:31:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.192 13:31:50 -- common/autotest_common.sh@10 -- # set +x 00:03:59.192 13:31:50 -- spdk/autotest.sh@78 -- # rm -f 00:03:59.192 13:31:50 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.451 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.451 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:59.451 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:59.710 13:31:51 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:59.710 13:31:51 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:59.710 13:31:51 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:59.710 13:31:51 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:59.710 13:31:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.710 13:31:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:59.710 13:31:51 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:59.710 13:31:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.710 13:31:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.710 13:31:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.710 13:31:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:59.710 13:31:51 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:59.710 13:31:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:59.710 13:31:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.710 13:31:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.710 13:31:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:03:59.710 13:31:51 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:03:59.710 13:31:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:59.710 13:31:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.710 13:31:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.710 13:31:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:03:59.710 13:31:51 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:03:59.710 13:31:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:59.710 13:31:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.710 13:31:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:59.710 13:31:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.710 13:31:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.710 13:31:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:59.710 13:31:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:59.710 13:31:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:59.710 No valid GPT data, bailing 
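The pre_cleanup pass running here walks every NVMe namespace, skips anything reported as zoned, asks spdk-gpt.py/blkid whether a partition table is present, and zeroes the first MiB of namespaces that look unused. Condensed into one loop it is roughly the following (a paraphrase of the autotest helpers, not their exact code, and destructive by design, so only meaningful on scratch devices):

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do       # whole namespaces only, no partitions
        name=$(basename "$dev")
        # leave zoned namespaces alone
        if [[ -e /sys/block/$name/queue/zoned ]] &&
           [[ $(< "/sys/block/$name/queue/zoned") != none ]]; then
            continue
        fi
        # no recognizable partition-table type -> treat the namespace as free
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done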
00:03:59.710 13:31:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:59.710 13:31:51 -- scripts/common.sh@394 -- # pt= 00:03:59.710 13:31:51 -- scripts/common.sh@395 -- # return 1 00:03:59.710 13:31:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:59.710 1+0 records in 00:03:59.710 1+0 records out 00:03:59.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456487 s, 230 MB/s 00:03:59.710 13:31:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.710 13:31:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.710 13:31:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:59.710 13:31:51 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:59.710 13:31:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:59.710 No valid GPT data, bailing 00:03:59.710 13:31:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:59.710 13:31:51 -- scripts/common.sh@394 -- # pt= 00:03:59.710 13:31:51 -- scripts/common.sh@395 -- # return 1 00:03:59.710 13:31:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:59.710 1+0 records in 00:03:59.710 1+0 records out 00:03:59.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448328 s, 234 MB/s 00:03:59.710 13:31:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.710 13:31:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.710 13:31:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:59.710 13:31:51 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:59.710 13:31:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:59.710 No valid GPT data, bailing 00:03:59.710 13:31:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:59.968 13:31:51 -- scripts/common.sh@394 -- # pt= 00:03:59.968 13:31:51 -- scripts/common.sh@395 -- # return 1 00:03:59.968 13:31:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:59.968 1+0 records in 00:03:59.968 1+0 records out 00:03:59.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00451017 s, 232 MB/s 00:03:59.969 13:31:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.969 13:31:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.969 13:31:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:59.969 13:31:51 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:59.969 13:31:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:59.969 No valid GPT data, bailing 00:03:59.969 13:31:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:59.969 13:31:51 -- scripts/common.sh@394 -- # pt= 00:03:59.969 13:31:51 -- scripts/common.sh@395 -- # return 1 00:03:59.969 13:31:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:59.969 1+0 records in 00:03:59.969 1+0 records out 00:03:59.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435112 s, 241 MB/s 00:03:59.969 13:31:51 -- spdk/autotest.sh@105 -- # sync 00:03:59.969 13:31:51 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:59.969 13:31:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:59.969 13:31:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:02.501 13:31:53 -- spdk/autotest.sh@111 -- # uname -s 00:04:02.501 13:31:53 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:04:02.501 13:31:53 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:02.501 13:31:53 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:02.760 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.760 Hugepages 00:04:02.760 node hugesize free / total 00:04:02.760 node0 1048576kB 0 / 0 00:04:02.760 node0 2048kB 0 / 0 00:04:02.760 00:04:02.760 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:02.760 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:03.019 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:03.019 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:03.019 13:31:54 -- spdk/autotest.sh@117 -- # uname -s 00:04:03.019 13:31:54 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:03.019 13:31:54 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:03.019 13:31:54 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.844 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.844 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.844 13:31:55 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:04.780 13:31:56 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:04.780 13:31:56 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:04.780 13:31:56 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:04.780 13:31:56 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:04.780 13:31:56 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:04.780 13:31:56 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:04.780 13:31:56 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.780 13:31:56 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:04.780 13:31:56 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:05.039 13:31:56 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:05.039 13:31:56 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:05.039 13:31:56 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.298 Waiting for block devices as requested 00:04:05.298 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.557 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.557 13:31:57 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:05.557 13:31:57 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:05.557 13:31:57 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:05.557 13:31:57 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:05.557 13:31:57 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:05.557 13:31:57 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:05.557 13:31:57 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:05.557 13:31:57 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:04:05.557 13:31:57 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:05.557 13:31:57 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:05.557 13:31:57 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:05.557 13:31:57 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:05.557 13:31:57 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:05.557 13:31:57 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:05.557 13:31:57 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:05.557 13:31:57 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:05.557 13:31:57 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:05.557 13:31:57 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:05.557 13:31:57 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:05.557 13:31:57 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:05.557 13:31:57 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:05.557 13:31:57 -- common/autotest_common.sh@1541 -- # continue 00:04:05.557 13:31:57 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:05.557 13:31:57 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:05.557 13:31:57 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:05.557 13:31:57 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:05.557 13:31:57 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:05.557 13:31:57 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:05.557 13:31:57 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:05.557 13:31:57 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:05.557 13:31:57 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:05.557 13:31:57 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:05.557 13:31:57 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:05.557 13:31:57 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:05.557 13:31:57 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:05.557 13:31:57 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:05.557 13:31:57 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:05.557 13:31:57 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:05.557 13:31:57 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:05.557 13:31:57 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:05.557 13:31:57 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:05.557 13:31:57 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:05.557 13:31:57 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:05.557 13:31:57 -- common/autotest_common.sh@1541 -- # continue 00:04:05.557 13:31:57 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:05.557 13:31:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:05.557 13:31:57 -- common/autotest_common.sh@10 -- # set +x 00:04:05.557 13:31:57 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:05.557 13:31:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:05.557 13:31:57 -- common/autotest_common.sh@10 -- # set +x 00:04:05.557 13:31:57 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.494 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.494 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.494 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.494 13:31:58 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:06.494 13:31:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:06.494 13:31:58 -- common/autotest_common.sh@10 -- # set +x 00:04:06.494 13:31:58 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:06.494 13:31:58 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:06.494 13:31:58 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:06.494 13:31:58 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:06.494 13:31:58 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:06.494 13:31:58 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:06.494 13:31:58 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:06.494 13:31:58 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:06.494 13:31:58 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:06.494 13:31:58 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:06.494 13:31:58 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.494 13:31:58 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:06.494 13:31:58 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:06.494 13:31:58 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:06.494 13:31:58 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:06.494 13:31:58 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:06.494 13:31:58 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:06.494 13:31:58 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:06.494 13:31:58 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:06.494 13:31:58 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:06.494 13:31:58 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:06.494 13:31:58 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:06.494 13:31:58 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:06.494 13:31:58 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:06.494 13:31:58 -- common/autotest_common.sh@1570 -- # return 0 00:04:06.494 13:31:58 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:06.494 13:31:58 -- common/autotest_common.sh@1578 -- # return 0 00:04:06.494 13:31:58 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:06.494 13:31:58 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:06.494 13:31:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:06.494 13:31:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:06.494 13:31:58 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:06.494 13:31:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.494 13:31:58 -- common/autotest_common.sh@10 -- # set +x 00:04:06.494 13:31:58 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:06.494 13:31:58 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:06.494 13:31:58 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:06.494 13:31:58 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:06.494 13:31:58 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.494 13:31:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.494 13:31:58 -- common/autotest_common.sh@10 -- # set +x 00:04:06.494 ************************************ 00:04:06.494 START TEST env 00:04:06.494 ************************************ 00:04:06.494 13:31:58 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:06.753 * Looking for test storage... 00:04:06.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:06.753 13:31:58 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:06.753 13:31:58 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:06.753 13:31:58 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:06.753 13:31:58 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:06.753 13:31:58 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.753 13:31:58 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.753 13:31:58 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.753 13:31:58 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.753 13:31:58 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.753 13:31:58 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.753 13:31:58 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.753 13:31:58 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.753 13:31:58 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.753 13:31:58 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.753 13:31:58 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.753 13:31:58 env -- scripts/common.sh@344 -- # case "$op" in 00:04:06.753 13:31:58 env -- scripts/common.sh@345 -- # : 1 00:04:06.753 13:31:58 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.753 13:31:58 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.753 13:31:58 env -- scripts/common.sh@365 -- # decimal 1 00:04:06.753 13:31:58 env -- scripts/common.sh@353 -- # local d=1 00:04:06.753 13:31:58 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.753 13:31:58 env -- scripts/common.sh@355 -- # echo 1 00:04:06.753 13:31:58 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.753 13:31:58 env -- scripts/common.sh@366 -- # decimal 2 00:04:06.753 13:31:58 env -- scripts/common.sh@353 -- # local d=2 00:04:06.753 13:31:58 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.753 13:31:58 env -- scripts/common.sh@355 -- # echo 2 00:04:06.753 13:31:58 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.753 13:31:58 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.753 13:31:58 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.753 13:31:58 env -- scripts/common.sh@368 -- # return 0 00:04:06.753 13:31:58 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.753 13:31:58 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:06.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.753 --rc genhtml_branch_coverage=1 00:04:06.753 --rc genhtml_function_coverage=1 00:04:06.753 --rc genhtml_legend=1 00:04:06.753 --rc geninfo_all_blocks=1 00:04:06.753 --rc geninfo_unexecuted_blocks=1 00:04:06.753 00:04:06.753 ' 00:04:06.753 13:31:58 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:06.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.753 --rc genhtml_branch_coverage=1 00:04:06.753 --rc genhtml_function_coverage=1 00:04:06.753 --rc genhtml_legend=1 00:04:06.753 --rc geninfo_all_blocks=1 00:04:06.753 --rc geninfo_unexecuted_blocks=1 00:04:06.753 00:04:06.753 ' 00:04:06.753 13:31:58 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:06.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.753 --rc genhtml_branch_coverage=1 00:04:06.753 --rc genhtml_function_coverage=1 00:04:06.753 --rc genhtml_legend=1 00:04:06.753 --rc geninfo_all_blocks=1 00:04:06.753 --rc geninfo_unexecuted_blocks=1 00:04:06.753 00:04:06.753 ' 00:04:06.753 13:31:58 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:06.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.753 --rc genhtml_branch_coverage=1 00:04:06.753 --rc genhtml_function_coverage=1 00:04:06.753 --rc genhtml_legend=1 00:04:06.753 --rc geninfo_all_blocks=1 00:04:06.753 --rc geninfo_unexecuted_blocks=1 00:04:06.753 00:04:06.753 ' 00:04:06.753 13:31:58 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:06.753 13:31:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.753 13:31:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.754 13:31:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.754 ************************************ 00:04:06.754 START TEST env_memory 00:04:06.754 ************************************ 00:04:06.754 13:31:58 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:06.754 00:04:06.754 00:04:06.754 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.754 http://cunit.sourceforge.net/ 00:04:06.754 00:04:06.754 00:04:06.754 Suite: memory 00:04:06.754 Test: alloc and free memory map ...[2024-10-01 13:31:58.590216] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:06.754 passed 00:04:07.013 Test: mem map translation ...[2024-10-01 13:31:58.620826] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:07.013 [2024-10-01 13:31:58.620866] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:07.013 [2024-10-01 13:31:58.620921] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:07.013 [2024-10-01 13:31:58.620932] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:07.013 passed 00:04:07.013 Test: mem map registration ...[2024-10-01 13:31:58.684519] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:07.013 [2024-10-01 13:31:58.684560] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:07.013 passed 00:04:07.013 Test: mem map adjacent registrations ...passed 00:04:07.013 00:04:07.013 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.013 suites 1 1 n/a 0 0 00:04:07.013 tests 4 4 4 0 0 00:04:07.013 asserts 152 152 152 0 n/a 00:04:07.013 00:04:07.013 Elapsed time = 0.212 seconds 00:04:07.013 00:04:07.013 real 0m0.229s 00:04:07.013 user 0m0.211s 00:04:07.013 sys 0m0.014s 00:04:07.013 13:31:58 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.013 13:31:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:07.013 ************************************ 00:04:07.013 END TEST env_memory 00:04:07.013 ************************************ 00:04:07.013 13:31:58 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:07.013 13:31:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.013 13:31:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.013 13:31:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.013 ************************************ 00:04:07.013 START TEST env_vtophys 00:04:07.013 ************************************ 00:04:07.013 13:31:58 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:07.013 EAL: lib.eal log level changed from notice to debug 00:04:07.013 EAL: Detected lcore 0 as core 0 on socket 0 00:04:07.013 EAL: Detected lcore 1 as core 0 on socket 0 00:04:07.013 EAL: Detected lcore 2 as core 0 on socket 0 00:04:07.013 EAL: Detected lcore 3 as core 0 on socket 0 00:04:07.013 EAL: Detected lcore 4 as core 0 on socket 0 00:04:07.013 EAL: Detected lcore 5 as core 0 on socket 0 00:04:07.013 EAL: Detected lcore 6 as core 0 on socket 0 00:04:07.013 EAL: Detected lcore 7 as core 0 on socket 0 00:04:07.013 EAL: Detected lcore 8 as core 0 on socket 0 00:04:07.013 EAL: Detected lcore 9 as core 0 on socket 0 00:04:07.013 EAL: Maximum logical cores by configuration: 128 00:04:07.013 EAL: Detected CPU lcores: 10 00:04:07.013 EAL: Detected NUMA nodes: 1 00:04:07.013 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:07.013 EAL: Detected shared linkage of DPDK 00:04:07.013 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:07.013 EAL: Selected IOVA mode 'PA' 00:04:07.013 EAL: Probing VFIO support... 00:04:07.013 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:07.013 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:07.013 EAL: Ask a virtual area of 0x2e000 bytes 00:04:07.013 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:07.013 EAL: Setting up physically contiguous memory... 00:04:07.013 EAL: Setting maximum number of open files to 524288 00:04:07.013 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:07.013 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:07.013 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.013 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:07.013 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.013 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.013 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:07.013 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:07.013 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.013 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:07.013 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.013 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.013 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:07.013 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:07.013 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.013 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:07.013 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.014 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.014 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:07.014 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:07.014 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.014 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:07.014 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.014 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.014 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:07.014 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:07.014 EAL: Hugepages will be freed exactly as allocated. 00:04:07.014 EAL: No shared files mode enabled, IPC is disabled 00:04:07.014 EAL: No shared files mode enabled, IPC is disabled 00:04:07.273 EAL: TSC frequency is ~2200000 KHz 00:04:07.273 EAL: Main lcore 0 is ready (tid=7f791c44da00;cpuset=[0]) 00:04:07.273 EAL: Trying to obtain current memory policy. 00:04:07.273 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.273 EAL: Restoring previous memory policy: 0 00:04:07.273 EAL: request: mp_malloc_sync 00:04:07.273 EAL: No shared files mode enabled, IPC is disabled 00:04:07.273 EAL: Heap on socket 0 was expanded by 2MB 00:04:07.273 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:07.273 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:07.273 EAL: Mem event callback 'spdk:(nil)' registered 00:04:07.273 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:07.273 00:04:07.273 00:04:07.273 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.273 http://cunit.sourceforge.net/ 00:04:07.273 00:04:07.273 00:04:07.273 Suite: components_suite 00:04:07.273 Test: vtophys_malloc_test ...passed 00:04:07.273 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:07.273 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.273 EAL: Restoring previous memory policy: 4 00:04:07.273 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.273 EAL: request: mp_malloc_sync 00:04:07.273 EAL: No shared files mode enabled, IPC is disabled 00:04:07.273 EAL: Heap on socket 0 was expanded by 4MB 00:04:07.273 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.273 EAL: request: mp_malloc_sync 00:04:07.273 EAL: No shared files mode enabled, IPC is disabled 00:04:07.273 EAL: Heap on socket 0 was shrunk by 4MB 00:04:07.273 EAL: Trying to obtain current memory policy. 00:04:07.273 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.273 EAL: Restoring previous memory policy: 4 00:04:07.273 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.273 EAL: request: mp_malloc_sync 00:04:07.273 EAL: No shared files mode enabled, IPC is disabled 00:04:07.273 EAL: Heap on socket 0 was expanded by 6MB 00:04:07.273 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.273 EAL: request: mp_malloc_sync 00:04:07.273 EAL: No shared files mode enabled, IPC is disabled 00:04:07.273 EAL: Heap on socket 0 was shrunk by 6MB 00:04:07.273 EAL: Trying to obtain current memory policy. 00:04:07.273 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.273 EAL: Restoring previous memory policy: 4 00:04:07.273 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.273 EAL: request: mp_malloc_sync 00:04:07.273 EAL: No shared files mode enabled, IPC is disabled 00:04:07.273 EAL: Heap on socket 0 was expanded by 10MB 00:04:07.273 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.273 EAL: request: mp_malloc_sync 00:04:07.273 EAL: No shared files mode enabled, IPC is disabled 00:04:07.273 EAL: Heap on socket 0 was shrunk by 10MB 00:04:07.274 EAL: Trying to obtain current memory policy. 00:04:07.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.274 EAL: Restoring previous memory policy: 4 00:04:07.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.274 EAL: request: mp_malloc_sync 00:04:07.274 EAL: No shared files mode enabled, IPC is disabled 00:04:07.274 EAL: Heap on socket 0 was expanded by 18MB 00:04:07.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.274 EAL: request: mp_malloc_sync 00:04:07.274 EAL: No shared files mode enabled, IPC is disabled 00:04:07.274 EAL: Heap on socket 0 was shrunk by 18MB 00:04:07.274 EAL: Trying to obtain current memory policy. 00:04:07.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.274 EAL: Restoring previous memory policy: 4 00:04:07.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.274 EAL: request: mp_malloc_sync 00:04:07.274 EAL: No shared files mode enabled, IPC is disabled 00:04:07.274 EAL: Heap on socket 0 was expanded by 34MB 00:04:07.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.274 EAL: request: mp_malloc_sync 00:04:07.274 EAL: No shared files mode enabled, IPC is disabled 00:04:07.274 EAL: Heap on socket 0 was shrunk by 34MB 00:04:07.274 EAL: Trying to obtain current memory policy. 
00:04:07.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.274 EAL: Restoring previous memory policy: 4 00:04:07.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.274 EAL: request: mp_malloc_sync 00:04:07.274 EAL: No shared files mode enabled, IPC is disabled 00:04:07.274 EAL: Heap on socket 0 was expanded by 66MB 00:04:07.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.274 EAL: request: mp_malloc_sync 00:04:07.274 EAL: No shared files mode enabled, IPC is disabled 00:04:07.274 EAL: Heap on socket 0 was shrunk by 66MB 00:04:07.274 EAL: Trying to obtain current memory policy. 00:04:07.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.274 EAL: Restoring previous memory policy: 4 00:04:07.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.274 EAL: request: mp_malloc_sync 00:04:07.274 EAL: No shared files mode enabled, IPC is disabled 00:04:07.274 EAL: Heap on socket 0 was expanded by 130MB 00:04:07.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.274 EAL: request: mp_malloc_sync 00:04:07.274 EAL: No shared files mode enabled, IPC is disabled 00:04:07.274 EAL: Heap on socket 0 was shrunk by 130MB 00:04:07.274 EAL: Trying to obtain current memory policy. 00:04:07.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.274 EAL: Restoring previous memory policy: 4 00:04:07.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.274 EAL: request: mp_malloc_sync 00:04:07.274 EAL: No shared files mode enabled, IPC is disabled 00:04:07.274 EAL: Heap on socket 0 was expanded by 258MB 00:04:07.594 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.594 EAL: request: mp_malloc_sync 00:04:07.594 EAL: No shared files mode enabled, IPC is disabled 00:04:07.594 EAL: Heap on socket 0 was shrunk by 258MB 00:04:07.594 EAL: Trying to obtain current memory policy. 00:04:07.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.594 EAL: Restoring previous memory policy: 4 00:04:07.594 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.594 EAL: request: mp_malloc_sync 00:04:07.594 EAL: No shared files mode enabled, IPC is disabled 00:04:07.594 EAL: Heap on socket 0 was expanded by 514MB 00:04:07.594 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.594 EAL: request: mp_malloc_sync 00:04:07.594 EAL: No shared files mode enabled, IPC is disabled 00:04:07.594 EAL: Heap on socket 0 was shrunk by 514MB 00:04:07.594 EAL: Trying to obtain current memory policy. 
00:04:07.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.877 EAL: Restoring previous memory policy: 4 00:04:07.877 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.877 EAL: request: mp_malloc_sync 00:04:07.877 EAL: No shared files mode enabled, IPC is disabled 00:04:07.877 EAL: Heap on socket 0 was expanded by 1026MB 00:04:07.877 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.136 passed 00:04:08.136 00:04:08.136 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.136 suites 1 1 n/a 0 0 00:04:08.136 tests 2 2 2 0 0 00:04:08.136 asserts 5442 5442 5442 0 n/a 00:04:08.136 00:04:08.136 Elapsed time = 0.725 seconds 00:04:08.136 EAL: request: mp_malloc_sync 00:04:08.136 EAL: No shared files mode enabled, IPC is disabled 00:04:08.136 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:08.136 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.136 EAL: request: mp_malloc_sync 00:04:08.136 EAL: No shared files mode enabled, IPC is disabled 00:04:08.136 EAL: Heap on socket 0 was shrunk by 2MB 00:04:08.136 EAL: No shared files mode enabled, IPC is disabled 00:04:08.136 EAL: No shared files mode enabled, IPC is disabled 00:04:08.136 EAL: No shared files mode enabled, IPC is disabled 00:04:08.136 00:04:08.136 real 0m0.925s 00:04:08.136 user 0m0.467s 00:04:08.136 sys 0m0.325s 00:04:08.136 13:31:59 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.136 13:31:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:08.136 ************************************ 00:04:08.136 END TEST env_vtophys 00:04:08.136 ************************************ 00:04:08.136 13:31:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:08.136 13:31:59 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.136 13:31:59 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.136 13:31:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.136 ************************************ 00:04:08.136 START TEST env_pci 00:04:08.136 ************************************ 00:04:08.136 13:31:59 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:08.136 00:04:08.136 00:04:08.136 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.136 http://cunit.sourceforge.net/ 00:04:08.136 00:04:08.136 00:04:08.136 Suite: pci 00:04:08.136 Test: pci_hook ...[2024-10-01 13:31:59.816287] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56582 has claimed it 00:04:08.136 passed 00:04:08.136 00:04:08.136 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.136 suites 1 1 n/a 0 0 00:04:08.136 tests 1 1 1 0 0 00:04:08.136 asserts 25 25 25 0 n/a 00:04:08.136 00:04:08.136 Elapsed time = 0.002 seconds 00:04:08.136 EAL: Cannot find device (10000:00:01.0) 00:04:08.136 EAL: Failed to attach device on primary process 00:04:08.136 00:04:08.136 real 0m0.021s 00:04:08.136 user 0m0.009s 00:04:08.136 sys 0m0.011s 00:04:08.136 13:31:59 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.136 13:31:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:08.136 ************************************ 00:04:08.136 END TEST env_pci 00:04:08.136 ************************************ 00:04:08.136 13:31:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:08.136 13:31:59 env -- env/env.sh@15 -- # uname 00:04:08.136 13:31:59 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:08.136 13:31:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:08.136 13:31:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.136 13:31:59 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:08.136 13:31:59 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.136 13:31:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.136 ************************************ 00:04:08.136 START TEST env_dpdk_post_init 00:04:08.136 ************************************ 00:04:08.136 13:31:59 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.136 EAL: Detected CPU lcores: 10 00:04:08.136 EAL: Detected NUMA nodes: 1 00:04:08.136 EAL: Detected shared linkage of DPDK 00:04:08.136 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:08.136 EAL: Selected IOVA mode 'PA' 00:04:08.395 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:08.395 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:08.395 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:08.395 Starting DPDK initialization... 00:04:08.395 Starting SPDK post initialization... 00:04:08.395 SPDK NVMe probe 00:04:08.395 Attaching to 0000:00:10.0 00:04:08.395 Attaching to 0000:00:11.0 00:04:08.395 Attached to 0000:00:10.0 00:04:08.395 Attached to 0000:00:11.0 00:04:08.395 Cleaning up... 00:04:08.395 00:04:08.395 real 0m0.172s 00:04:08.395 user 0m0.041s 00:04:08.395 sys 0m0.031s 00:04:08.395 13:32:00 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.395 13:32:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:08.395 ************************************ 00:04:08.395 END TEST env_dpdk_post_init 00:04:08.395 ************************************ 00:04:08.395 13:32:00 env -- env/env.sh@26 -- # uname 00:04:08.395 13:32:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:08.395 13:32:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:08.395 13:32:00 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.395 13:32:00 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.395 13:32:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.395 ************************************ 00:04:08.395 START TEST env_mem_callbacks 00:04:08.395 ************************************ 00:04:08.395 13:32:00 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:08.395 EAL: Detected CPU lcores: 10 00:04:08.395 EAL: Detected NUMA nodes: 1 00:04:08.395 EAL: Detected shared linkage of DPDK 00:04:08.395 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:08.395 EAL: Selected IOVA mode 'PA' 00:04:08.395 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:08.395 00:04:08.395 00:04:08.395 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.395 http://cunit.sourceforge.net/ 00:04:08.395 00:04:08.395 00:04:08.395 Suite: memory 00:04:08.395 Test: test ... 
00:04:08.395 register 0x200000200000 2097152 00:04:08.395 malloc 3145728 00:04:08.395 register 0x200000400000 4194304 00:04:08.395 buf 0x200000500000 len 3145728 PASSED 00:04:08.395 malloc 64 00:04:08.395 buf 0x2000004fff40 len 64 PASSED 00:04:08.395 malloc 4194304 00:04:08.395 register 0x200000800000 6291456 00:04:08.395 buf 0x200000a00000 len 4194304 PASSED 00:04:08.395 free 0x200000500000 3145728 00:04:08.395 free 0x2000004fff40 64 00:04:08.395 unregister 0x200000400000 4194304 PASSED 00:04:08.395 free 0x200000a00000 4194304 00:04:08.395 unregister 0x200000800000 6291456 PASSED 00:04:08.395 malloc 8388608 00:04:08.395 register 0x200000400000 10485760 00:04:08.395 buf 0x200000600000 len 8388608 PASSED 00:04:08.395 free 0x200000600000 8388608 00:04:08.395 unregister 0x200000400000 10485760 PASSED 00:04:08.395 passed 00:04:08.395 00:04:08.395 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.395 suites 1 1 n/a 0 0 00:04:08.395 tests 1 1 1 0 0 00:04:08.395 asserts 15 15 15 0 n/a 00:04:08.395 00:04:08.395 Elapsed time = 0.008 seconds 00:04:08.395 00:04:08.395 real 0m0.140s 00:04:08.395 user 0m0.013s 00:04:08.395 sys 0m0.026s 00:04:08.395 13:32:00 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.395 ************************************ 00:04:08.395 END TEST env_mem_callbacks 00:04:08.395 13:32:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:08.395 ************************************ 00:04:08.654 00:04:08.654 real 0m1.955s 00:04:08.654 user 0m0.946s 00:04:08.654 sys 0m0.653s 00:04:08.654 13:32:00 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.654 13:32:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.654 ************************************ 00:04:08.654 END TEST env 00:04:08.654 ************************************ 00:04:08.654 13:32:00 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:08.654 13:32:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.654 13:32:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.654 13:32:00 -- common/autotest_common.sh@10 -- # set +x 00:04:08.654 ************************************ 00:04:08.654 START TEST rpc 00:04:08.654 ************************************ 00:04:08.654 13:32:00 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:08.654 * Looking for test storage... 
00:04:08.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.654 13:32:00 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:08.654 13:32:00 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:08.654 13:32:00 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:08.654 13:32:00 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:08.654 13:32:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.913 13:32:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.913 13:32:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.913 13:32:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.913 13:32:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.913 13:32:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.913 13:32:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.913 13:32:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.913 13:32:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.913 13:32:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.913 13:32:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.913 13:32:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:08.913 13:32:00 rpc -- scripts/common.sh@345 -- # : 1 00:04:08.913 13:32:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.913 13:32:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.913 13:32:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:08.913 13:32:00 rpc -- scripts/common.sh@353 -- # local d=1 00:04:08.913 13:32:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.913 13:32:00 rpc -- scripts/common.sh@355 -- # echo 1 00:04:08.913 13:32:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.913 13:32:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:08.913 13:32:00 rpc -- scripts/common.sh@353 -- # local d=2 00:04:08.913 13:32:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.913 13:32:00 rpc -- scripts/common.sh@355 -- # echo 2 00:04:08.913 13:32:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.913 13:32:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.913 13:32:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.913 13:32:00 rpc -- scripts/common.sh@368 -- # return 0 00:04:08.913 13:32:00 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.913 13:32:00 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:08.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.913 --rc genhtml_branch_coverage=1 00:04:08.913 --rc genhtml_function_coverage=1 00:04:08.913 --rc genhtml_legend=1 00:04:08.913 --rc geninfo_all_blocks=1 00:04:08.913 --rc geninfo_unexecuted_blocks=1 00:04:08.913 00:04:08.913 ' 00:04:08.913 13:32:00 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:08.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.913 --rc genhtml_branch_coverage=1 00:04:08.913 --rc genhtml_function_coverage=1 00:04:08.913 --rc genhtml_legend=1 00:04:08.913 --rc geninfo_all_blocks=1 00:04:08.913 --rc geninfo_unexecuted_blocks=1 00:04:08.913 00:04:08.913 ' 00:04:08.913 13:32:00 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:08.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.913 --rc genhtml_branch_coverage=1 00:04:08.913 --rc genhtml_function_coverage=1 00:04:08.913 --rc 
genhtml_legend=1 00:04:08.913 --rc geninfo_all_blocks=1 00:04:08.913 --rc geninfo_unexecuted_blocks=1 00:04:08.913 00:04:08.913 ' 00:04:08.913 13:32:00 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:08.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.913 --rc genhtml_branch_coverage=1 00:04:08.913 --rc genhtml_function_coverage=1 00:04:08.913 --rc genhtml_legend=1 00:04:08.913 --rc geninfo_all_blocks=1 00:04:08.913 --rc geninfo_unexecuted_blocks=1 00:04:08.913 00:04:08.913 ' 00:04:08.913 13:32:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56705 00:04:08.913 13:32:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:08.913 13:32:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.913 13:32:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56705 00:04:08.913 13:32:00 rpc -- common/autotest_common.sh@831 -- # '[' -z 56705 ']' 00:04:08.913 13:32:00 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.913 13:32:00 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:08.913 13:32:00 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.913 13:32:00 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:08.913 13:32:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.913 [2024-10-01 13:32:00.601936] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:04:08.913 [2024-10-01 13:32:00.602060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56705 ] 00:04:08.913 [2024-10-01 13:32:00.743319] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.172 [2024-10-01 13:32:00.812608] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:09.172 [2024-10-01 13:32:00.812671] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56705' to capture a snapshot of events at runtime. 00:04:09.172 [2024-10-01 13:32:00.812685] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:09.172 [2024-10-01 13:32:00.812695] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:09.172 [2024-10-01 13:32:00.812704] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56705 for offline analysis/debug. 
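Because the target above was started with -e bdev, the bdev tracepoint group (mask 0x8) is enabled and a trace shared-memory file is created at /dev/shm/spdk_tgt_trace.pid56705. A hedged sketch of capturing that trace with the spdk_trace command the notice suggests; the build/bin location of the spdk_trace binary is an assumption about this build tree:

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/scripts/rpc.py trace_get_info | jq .tpoint_shm_path   # -> /dev/shm/spdk_tgt_trace.pid56705
  $SPDK/build/bin/spdk_trace -s spdk_tgt -p 56705 | head      # snapshot of recorded bdev events
  # after the target exits, the shm file can also be copied for offline analysis:
  cp /dev/shm/spdk_tgt_trace.pid56705 /tmp/ 2>/dev/null || true

trace_get_info is the same RPC that the rpc_trace_cmd_test case below queries when it checks tpoint_group_mask and the bdev tpoint_mask.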
00:04:09.172 [2024-10-01 13:32:00.812735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.172 [2024-10-01 13:32:00.857281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:09.172 13:32:00 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:09.172 13:32:00 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:09.172 13:32:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:09.172 13:32:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:09.172 13:32:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:09.172 13:32:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:09.172 13:32:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.172 13:32:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.172 13:32:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.172 ************************************ 00:04:09.172 START TEST rpc_integrity 00:04:09.172 ************************************ 00:04:09.172 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:09.172 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.172 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.172 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.172 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.172 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.172 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:09.430 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.431 { 00:04:09.431 "name": "Malloc0", 00:04:09.431 "aliases": [ 00:04:09.431 "5346ee7d-63af-4ef1-be6c-08577dee39e7" 00:04:09.431 ], 00:04:09.431 "product_name": "Malloc disk", 00:04:09.431 "block_size": 512, 00:04:09.431 "num_blocks": 16384, 00:04:09.431 "uuid": "5346ee7d-63af-4ef1-be6c-08577dee39e7", 00:04:09.431 "assigned_rate_limits": { 00:04:09.431 "rw_ios_per_sec": 0, 00:04:09.431 "rw_mbytes_per_sec": 0, 00:04:09.431 "r_mbytes_per_sec": 0, 00:04:09.431 "w_mbytes_per_sec": 0 00:04:09.431 }, 00:04:09.431 "claimed": false, 00:04:09.431 "zoned": false, 00:04:09.431 
"supported_io_types": { 00:04:09.431 "read": true, 00:04:09.431 "write": true, 00:04:09.431 "unmap": true, 00:04:09.431 "flush": true, 00:04:09.431 "reset": true, 00:04:09.431 "nvme_admin": false, 00:04:09.431 "nvme_io": false, 00:04:09.431 "nvme_io_md": false, 00:04:09.431 "write_zeroes": true, 00:04:09.431 "zcopy": true, 00:04:09.431 "get_zone_info": false, 00:04:09.431 "zone_management": false, 00:04:09.431 "zone_append": false, 00:04:09.431 "compare": false, 00:04:09.431 "compare_and_write": false, 00:04:09.431 "abort": true, 00:04:09.431 "seek_hole": false, 00:04:09.431 "seek_data": false, 00:04:09.431 "copy": true, 00:04:09.431 "nvme_iov_md": false 00:04:09.431 }, 00:04:09.431 "memory_domains": [ 00:04:09.431 { 00:04:09.431 "dma_device_id": "system", 00:04:09.431 "dma_device_type": 1 00:04:09.431 }, 00:04:09.431 { 00:04:09.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.431 "dma_device_type": 2 00:04:09.431 } 00:04:09.431 ], 00:04:09.431 "driver_specific": {} 00:04:09.431 } 00:04:09.431 ]' 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.431 [2024-10-01 13:32:01.162112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:09.431 [2024-10-01 13:32:01.162184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.431 [2024-10-01 13:32:01.162215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17bd120 00:04:09.431 [2024-10-01 13:32:01.162223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.431 [2024-10-01 13:32:01.163739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.431 [2024-10-01 13:32:01.163789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.431 Passthru0 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.431 { 00:04:09.431 "name": "Malloc0", 00:04:09.431 "aliases": [ 00:04:09.431 "5346ee7d-63af-4ef1-be6c-08577dee39e7" 00:04:09.431 ], 00:04:09.431 "product_name": "Malloc disk", 00:04:09.431 "block_size": 512, 00:04:09.431 "num_blocks": 16384, 00:04:09.431 "uuid": "5346ee7d-63af-4ef1-be6c-08577dee39e7", 00:04:09.431 "assigned_rate_limits": { 00:04:09.431 "rw_ios_per_sec": 0, 00:04:09.431 "rw_mbytes_per_sec": 0, 00:04:09.431 "r_mbytes_per_sec": 0, 00:04:09.431 "w_mbytes_per_sec": 0 00:04:09.431 }, 00:04:09.431 "claimed": true, 00:04:09.431 "claim_type": "exclusive_write", 00:04:09.431 "zoned": false, 00:04:09.431 "supported_io_types": { 00:04:09.431 "read": true, 00:04:09.431 "write": true, 00:04:09.431 "unmap": true, 00:04:09.431 "flush": true, 00:04:09.431 "reset": true, 00:04:09.431 "nvme_admin": false, 
00:04:09.431 "nvme_io": false, 00:04:09.431 "nvme_io_md": false, 00:04:09.431 "write_zeroes": true, 00:04:09.431 "zcopy": true, 00:04:09.431 "get_zone_info": false, 00:04:09.431 "zone_management": false, 00:04:09.431 "zone_append": false, 00:04:09.431 "compare": false, 00:04:09.431 "compare_and_write": false, 00:04:09.431 "abort": true, 00:04:09.431 "seek_hole": false, 00:04:09.431 "seek_data": false, 00:04:09.431 "copy": true, 00:04:09.431 "nvme_iov_md": false 00:04:09.431 }, 00:04:09.431 "memory_domains": [ 00:04:09.431 { 00:04:09.431 "dma_device_id": "system", 00:04:09.431 "dma_device_type": 1 00:04:09.431 }, 00:04:09.431 { 00:04:09.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.431 "dma_device_type": 2 00:04:09.431 } 00:04:09.431 ], 00:04:09.431 "driver_specific": {} 00:04:09.431 }, 00:04:09.431 { 00:04:09.431 "name": "Passthru0", 00:04:09.431 "aliases": [ 00:04:09.431 "3a883227-c794-55fd-851f-347d13585fdb" 00:04:09.431 ], 00:04:09.431 "product_name": "passthru", 00:04:09.431 "block_size": 512, 00:04:09.431 "num_blocks": 16384, 00:04:09.431 "uuid": "3a883227-c794-55fd-851f-347d13585fdb", 00:04:09.431 "assigned_rate_limits": { 00:04:09.431 "rw_ios_per_sec": 0, 00:04:09.431 "rw_mbytes_per_sec": 0, 00:04:09.431 "r_mbytes_per_sec": 0, 00:04:09.431 "w_mbytes_per_sec": 0 00:04:09.431 }, 00:04:09.431 "claimed": false, 00:04:09.431 "zoned": false, 00:04:09.431 "supported_io_types": { 00:04:09.431 "read": true, 00:04:09.431 "write": true, 00:04:09.431 "unmap": true, 00:04:09.431 "flush": true, 00:04:09.431 "reset": true, 00:04:09.431 "nvme_admin": false, 00:04:09.431 "nvme_io": false, 00:04:09.431 "nvme_io_md": false, 00:04:09.431 "write_zeroes": true, 00:04:09.431 "zcopy": true, 00:04:09.431 "get_zone_info": false, 00:04:09.431 "zone_management": false, 00:04:09.431 "zone_append": false, 00:04:09.431 "compare": false, 00:04:09.431 "compare_and_write": false, 00:04:09.431 "abort": true, 00:04:09.431 "seek_hole": false, 00:04:09.431 "seek_data": false, 00:04:09.431 "copy": true, 00:04:09.431 "nvme_iov_md": false 00:04:09.431 }, 00:04:09.431 "memory_domains": [ 00:04:09.431 { 00:04:09.431 "dma_device_id": "system", 00:04:09.431 "dma_device_type": 1 00:04:09.431 }, 00:04:09.431 { 00:04:09.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.431 "dma_device_type": 2 00:04:09.431 } 00:04:09.431 ], 00:04:09.431 "driver_specific": { 00:04:09.431 "passthru": { 00:04:09.431 "name": "Passthru0", 00:04:09.431 "base_bdev_name": "Malloc0" 00:04:09.431 } 00:04:09.431 } 00:04:09.431 } 00:04:09.431 ]' 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.431 13:32:01 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.431 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.431 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.690 13:32:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.690 00:04:09.690 real 0m0.320s 00:04:09.690 user 0m0.216s 00:04:09.690 sys 0m0.035s 00:04:09.690 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.690 13:32:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.690 ************************************ 00:04:09.690 END TEST rpc_integrity 00:04:09.690 ************************************ 00:04:09.690 13:32:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:09.690 13:32:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.690 13:32:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.690 13:32:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.690 ************************************ 00:04:09.690 START TEST rpc_plugins 00:04:09.690 ************************************ 00:04:09.690 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:09.690 13:32:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:09.690 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.690 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.690 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.690 13:32:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:09.690 13:32:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:09.690 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.690 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.690 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.690 13:32:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:09.690 { 00:04:09.690 "name": "Malloc1", 00:04:09.690 "aliases": [ 00:04:09.690 "8bd8b9f2-dbc6-47ed-bd8b-80f8eb96ddd4" 00:04:09.690 ], 00:04:09.690 "product_name": "Malloc disk", 00:04:09.690 "block_size": 4096, 00:04:09.690 "num_blocks": 256, 00:04:09.690 "uuid": "8bd8b9f2-dbc6-47ed-bd8b-80f8eb96ddd4", 00:04:09.690 "assigned_rate_limits": { 00:04:09.690 "rw_ios_per_sec": 0, 00:04:09.690 "rw_mbytes_per_sec": 0, 00:04:09.690 "r_mbytes_per_sec": 0, 00:04:09.690 "w_mbytes_per_sec": 0 00:04:09.690 }, 00:04:09.690 "claimed": false, 00:04:09.690 "zoned": false, 00:04:09.690 "supported_io_types": { 00:04:09.690 "read": true, 00:04:09.690 "write": true, 00:04:09.690 "unmap": true, 00:04:09.690 "flush": true, 00:04:09.690 "reset": true, 00:04:09.690 "nvme_admin": false, 00:04:09.690 "nvme_io": false, 00:04:09.690 "nvme_io_md": false, 00:04:09.690 "write_zeroes": true, 00:04:09.690 "zcopy": true, 00:04:09.690 "get_zone_info": false, 00:04:09.690 "zone_management": false, 00:04:09.690 "zone_append": false, 00:04:09.690 "compare": false, 00:04:09.690 "compare_and_write": false, 00:04:09.690 "abort": true, 00:04:09.690 "seek_hole": false, 00:04:09.690 "seek_data": false, 00:04:09.690 "copy": true, 00:04:09.690 "nvme_iov_md": false 00:04:09.690 }, 00:04:09.690 "memory_domains": [ 00:04:09.690 { 
00:04:09.690 "dma_device_id": "system", 00:04:09.690 "dma_device_type": 1 00:04:09.690 }, 00:04:09.690 { 00:04:09.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.690 "dma_device_type": 2 00:04:09.690 } 00:04:09.690 ], 00:04:09.690 "driver_specific": {} 00:04:09.690 } 00:04:09.690 ]' 00:04:09.690 13:32:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:09.690 13:32:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:09.690 13:32:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:09.690 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.690 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.690 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.690 13:32:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:09.690 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.690 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.690 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.690 13:32:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:09.690 13:32:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:09.949 13:32:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:09.949 00:04:09.949 real 0m0.167s 00:04:09.949 user 0m0.104s 00:04:09.949 sys 0m0.022s 00:04:09.949 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.949 13:32:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.949 ************************************ 00:04:09.949 END TEST rpc_plugins 00:04:09.949 ************************************ 00:04:09.949 13:32:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:09.949 13:32:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.949 13:32:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.949 13:32:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.949 ************************************ 00:04:09.949 START TEST rpc_trace_cmd_test 00:04:09.949 ************************************ 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:09.949 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56705", 00:04:09.949 "tpoint_group_mask": "0x8", 00:04:09.949 "iscsi_conn": { 00:04:09.949 "mask": "0x2", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "scsi": { 00:04:09.949 "mask": "0x4", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "bdev": { 00:04:09.949 "mask": "0x8", 00:04:09.949 "tpoint_mask": "0xffffffffffffffff" 00:04:09.949 }, 00:04:09.949 "nvmf_rdma": { 00:04:09.949 "mask": "0x10", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "nvmf_tcp": { 00:04:09.949 "mask": "0x20", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "ftl": { 00:04:09.949 
"mask": "0x40", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "blobfs": { 00:04:09.949 "mask": "0x80", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "dsa": { 00:04:09.949 "mask": "0x200", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "thread": { 00:04:09.949 "mask": "0x400", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "nvme_pcie": { 00:04:09.949 "mask": "0x800", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "iaa": { 00:04:09.949 "mask": "0x1000", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "nvme_tcp": { 00:04:09.949 "mask": "0x2000", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "bdev_nvme": { 00:04:09.949 "mask": "0x4000", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "sock": { 00:04:09.949 "mask": "0x8000", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "blob": { 00:04:09.949 "mask": "0x10000", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 }, 00:04:09.949 "bdev_raid": { 00:04:09.949 "mask": "0x20000", 00:04:09.949 "tpoint_mask": "0x0" 00:04:09.949 } 00:04:09.949 }' 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:09.949 13:32:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:10.208 13:32:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:10.208 13:32:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:10.208 13:32:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:10.208 00:04:10.208 real 0m0.306s 00:04:10.208 user 0m0.267s 00:04:10.208 sys 0m0.023s 00:04:10.208 13:32:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.208 13:32:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:10.208 ************************************ 00:04:10.208 END TEST rpc_trace_cmd_test 00:04:10.208 ************************************ 00:04:10.208 13:32:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:10.208 13:32:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:10.208 13:32:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:10.208 13:32:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.208 13:32:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.208 13:32:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.208 ************************************ 00:04:10.208 START TEST rpc_daemon_integrity 00:04:10.208 ************************************ 00:04:10.208 13:32:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:10.208 13:32:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:10.208 13:32:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.208 13:32:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.208 13:32:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.208 13:32:01 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:10.208 13:32:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:10.208 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:10.208 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:10.208 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.208 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.208 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.208 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:10.208 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:10.208 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.208 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.208 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.208 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:10.208 { 00:04:10.208 "name": "Malloc2", 00:04:10.208 "aliases": [ 00:04:10.208 "177ee775-08cf-4a6d-b063-1654a0611ef9" 00:04:10.208 ], 00:04:10.208 "product_name": "Malloc disk", 00:04:10.208 "block_size": 512, 00:04:10.208 "num_blocks": 16384, 00:04:10.208 "uuid": "177ee775-08cf-4a6d-b063-1654a0611ef9", 00:04:10.208 "assigned_rate_limits": { 00:04:10.208 "rw_ios_per_sec": 0, 00:04:10.208 "rw_mbytes_per_sec": 0, 00:04:10.208 "r_mbytes_per_sec": 0, 00:04:10.208 "w_mbytes_per_sec": 0 00:04:10.208 }, 00:04:10.208 "claimed": false, 00:04:10.208 "zoned": false, 00:04:10.208 "supported_io_types": { 00:04:10.208 "read": true, 00:04:10.208 "write": true, 00:04:10.208 "unmap": true, 00:04:10.208 "flush": true, 00:04:10.208 "reset": true, 00:04:10.208 "nvme_admin": false, 00:04:10.208 "nvme_io": false, 00:04:10.208 "nvme_io_md": false, 00:04:10.208 "write_zeroes": true, 00:04:10.208 "zcopy": true, 00:04:10.208 "get_zone_info": false, 00:04:10.208 "zone_management": false, 00:04:10.208 "zone_append": false, 00:04:10.208 "compare": false, 00:04:10.208 "compare_and_write": false, 00:04:10.208 "abort": true, 00:04:10.208 "seek_hole": false, 00:04:10.208 "seek_data": false, 00:04:10.208 "copy": true, 00:04:10.208 "nvme_iov_md": false 00:04:10.208 }, 00:04:10.208 "memory_domains": [ 00:04:10.208 { 00:04:10.208 "dma_device_id": "system", 00:04:10.208 "dma_device_type": 1 00:04:10.208 }, 00:04:10.208 { 00:04:10.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.208 "dma_device_type": 2 00:04:10.208 } 00:04:10.208 ], 00:04:10.208 "driver_specific": {} 00:04:10.208 } 00:04:10.208 ]' 00:04:10.208 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.468 [2024-10-01 13:32:02.114518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:10.468 [2024-10-01 13:32:02.114604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:10.468 [2024-10-01 13:32:02.114623] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x17cba80 00:04:10.468 [2024-10-01 13:32:02.114632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:10.468 [2024-10-01 13:32:02.116496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:10.468 [2024-10-01 13:32:02.116585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:10.468 Passthru0 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:10.468 { 00:04:10.468 "name": "Malloc2", 00:04:10.468 "aliases": [ 00:04:10.468 "177ee775-08cf-4a6d-b063-1654a0611ef9" 00:04:10.468 ], 00:04:10.468 "product_name": "Malloc disk", 00:04:10.468 "block_size": 512, 00:04:10.468 "num_blocks": 16384, 00:04:10.468 "uuid": "177ee775-08cf-4a6d-b063-1654a0611ef9", 00:04:10.468 "assigned_rate_limits": { 00:04:10.468 "rw_ios_per_sec": 0, 00:04:10.468 "rw_mbytes_per_sec": 0, 00:04:10.468 "r_mbytes_per_sec": 0, 00:04:10.468 "w_mbytes_per_sec": 0 00:04:10.468 }, 00:04:10.468 "claimed": true, 00:04:10.468 "claim_type": "exclusive_write", 00:04:10.468 "zoned": false, 00:04:10.468 "supported_io_types": { 00:04:10.468 "read": true, 00:04:10.468 "write": true, 00:04:10.468 "unmap": true, 00:04:10.468 "flush": true, 00:04:10.468 "reset": true, 00:04:10.468 "nvme_admin": false, 00:04:10.468 "nvme_io": false, 00:04:10.468 "nvme_io_md": false, 00:04:10.468 "write_zeroes": true, 00:04:10.468 "zcopy": true, 00:04:10.468 "get_zone_info": false, 00:04:10.468 "zone_management": false, 00:04:10.468 "zone_append": false, 00:04:10.468 "compare": false, 00:04:10.468 "compare_and_write": false, 00:04:10.468 "abort": true, 00:04:10.468 "seek_hole": false, 00:04:10.468 "seek_data": false, 00:04:10.468 "copy": true, 00:04:10.468 "nvme_iov_md": false 00:04:10.468 }, 00:04:10.468 "memory_domains": [ 00:04:10.468 { 00:04:10.468 "dma_device_id": "system", 00:04:10.468 "dma_device_type": 1 00:04:10.468 }, 00:04:10.468 { 00:04:10.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.468 "dma_device_type": 2 00:04:10.468 } 00:04:10.468 ], 00:04:10.468 "driver_specific": {} 00:04:10.468 }, 00:04:10.468 { 00:04:10.468 "name": "Passthru0", 00:04:10.468 "aliases": [ 00:04:10.468 "3d0d5369-67bf-5d55-b93a-f7fb1af3c0b8" 00:04:10.468 ], 00:04:10.468 "product_name": "passthru", 00:04:10.468 "block_size": 512, 00:04:10.468 "num_blocks": 16384, 00:04:10.468 "uuid": "3d0d5369-67bf-5d55-b93a-f7fb1af3c0b8", 00:04:10.468 "assigned_rate_limits": { 00:04:10.468 "rw_ios_per_sec": 0, 00:04:10.468 "rw_mbytes_per_sec": 0, 00:04:10.468 "r_mbytes_per_sec": 0, 00:04:10.468 "w_mbytes_per_sec": 0 00:04:10.468 }, 00:04:10.468 "claimed": false, 00:04:10.468 "zoned": false, 00:04:10.468 "supported_io_types": { 00:04:10.468 "read": true, 00:04:10.468 "write": true, 00:04:10.468 "unmap": true, 00:04:10.468 "flush": true, 00:04:10.468 "reset": true, 00:04:10.468 "nvme_admin": false, 00:04:10.468 "nvme_io": false, 00:04:10.468 "nvme_io_md": false, 00:04:10.468 "write_zeroes": true, 00:04:10.468 "zcopy": true, 00:04:10.468 "get_zone_info": 
false, 00:04:10.468 "zone_management": false, 00:04:10.468 "zone_append": false, 00:04:10.468 "compare": false, 00:04:10.468 "compare_and_write": false, 00:04:10.468 "abort": true, 00:04:10.468 "seek_hole": false, 00:04:10.468 "seek_data": false, 00:04:10.468 "copy": true, 00:04:10.468 "nvme_iov_md": false 00:04:10.468 }, 00:04:10.468 "memory_domains": [ 00:04:10.468 { 00:04:10.468 "dma_device_id": "system", 00:04:10.468 "dma_device_type": 1 00:04:10.468 }, 00:04:10.468 { 00:04:10.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.468 "dma_device_type": 2 00:04:10.468 } 00:04:10.468 ], 00:04:10.468 "driver_specific": { 00:04:10.468 "passthru": { 00:04:10.468 "name": "Passthru0", 00:04:10.468 "base_bdev_name": "Malloc2" 00:04:10.468 } 00:04:10.468 } 00:04:10.468 } 00:04:10.468 ]' 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:10.468 00:04:10.468 real 0m0.317s 00:04:10.468 user 0m0.216s 00:04:10.468 sys 0m0.041s 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.468 13:32:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.468 ************************************ 00:04:10.468 END TEST rpc_daemon_integrity 00:04:10.468 ************************************ 00:04:10.468 13:32:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:10.468 13:32:02 rpc -- rpc/rpc.sh@84 -- # killprocess 56705 00:04:10.468 13:32:02 rpc -- common/autotest_common.sh@950 -- # '[' -z 56705 ']' 00:04:10.468 13:32:02 rpc -- common/autotest_common.sh@954 -- # kill -0 56705 00:04:10.468 13:32:02 rpc -- common/autotest_common.sh@955 -- # uname 00:04:10.728 13:32:02 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:10.728 13:32:02 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56705 00:04:10.728 13:32:02 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:10.728 13:32:02 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:10.728 killing process 
with pid 56705 00:04:10.728 13:32:02 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56705' 00:04:10.728 13:32:02 rpc -- common/autotest_common.sh@969 -- # kill 56705 00:04:10.728 13:32:02 rpc -- common/autotest_common.sh@974 -- # wait 56705 00:04:10.987 00:04:10.987 real 0m2.292s 00:04:10.987 user 0m3.058s 00:04:10.987 sys 0m0.589s 00:04:10.987 13:32:02 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.987 13:32:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.987 ************************************ 00:04:10.987 END TEST rpc 00:04:10.987 ************************************ 00:04:10.987 13:32:02 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:10.987 13:32:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.987 13:32:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.987 13:32:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.987 ************************************ 00:04:10.987 START TEST skip_rpc 00:04:10.987 ************************************ 00:04:10.987 13:32:02 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:10.987 * Looking for test storage... 00:04:10.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.987 13:32:02 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:10.987 13:32:02 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:10.987 13:32:02 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:11.246 13:32:02 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.246 13:32:02 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:11.246 13:32:02 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.246 13:32:02 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:11.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.246 --rc genhtml_branch_coverage=1 00:04:11.246 --rc genhtml_function_coverage=1 00:04:11.246 --rc genhtml_legend=1 00:04:11.246 --rc geninfo_all_blocks=1 00:04:11.246 --rc geninfo_unexecuted_blocks=1 00:04:11.246 00:04:11.246 ' 00:04:11.246 13:32:02 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:11.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.246 --rc genhtml_branch_coverage=1 00:04:11.246 --rc genhtml_function_coverage=1 00:04:11.246 --rc genhtml_legend=1 00:04:11.246 --rc geninfo_all_blocks=1 00:04:11.246 --rc geninfo_unexecuted_blocks=1 00:04:11.246 00:04:11.246 ' 00:04:11.246 13:32:02 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:11.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.246 --rc genhtml_branch_coverage=1 00:04:11.246 --rc genhtml_function_coverage=1 00:04:11.246 --rc genhtml_legend=1 00:04:11.246 --rc geninfo_all_blocks=1 00:04:11.246 --rc geninfo_unexecuted_blocks=1 00:04:11.246 00:04:11.246 ' 00:04:11.246 13:32:02 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:11.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.246 --rc genhtml_branch_coverage=1 00:04:11.246 --rc genhtml_function_coverage=1 00:04:11.246 --rc genhtml_legend=1 00:04:11.246 --rc geninfo_all_blocks=1 00:04:11.246 --rc geninfo_unexecuted_blocks=1 00:04:11.246 00:04:11.246 ' 00:04:11.246 13:32:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:11.246 13:32:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:11.246 13:32:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:11.246 13:32:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:11.246 13:32:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.246 13:32:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.246 ************************************ 00:04:11.246 START TEST skip_rpc 00:04:11.246 ************************************ 00:04:11.246 13:32:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:11.246 13:32:02 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56898 00:04:11.246 13:32:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.246 13:32:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:11.246 13:32:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:11.246 [2024-10-01 13:32:02.950149] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:04:11.246 [2024-10-01 13:32:02.950253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56898 ] 00:04:11.246 [2024-10-01 13:32:03.088972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.505 [2024-10-01 13:32:03.146434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.505 [2024-10-01 13:32:03.189074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56898 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 56898 ']' 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 56898 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56898 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:16.772 killing process with pid 56898 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 56898' 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 56898 00:04:16.772 13:32:07 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 56898 00:04:16.772 00:04:16.772 real 0m5.306s 00:04:16.772 user 0m5.027s 00:04:16.772 sys 0m0.193s 00:04:16.772 13:32:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.772 13:32:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.772 ************************************ 00:04:16.772 END TEST skip_rpc 00:04:16.772 ************************************ 00:04:16.772 13:32:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:16.772 13:32:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.772 13:32:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.772 13:32:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.772 ************************************ 00:04:16.772 START TEST skip_rpc_with_json 00:04:16.772 ************************************ 00:04:16.772 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:16.772 13:32:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:16.772 13:32:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56979 00:04:16.772 13:32:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:16.772 13:32:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.772 13:32:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56979 00:04:16.772 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 56979 ']' 00:04:16.772 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.772 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:16.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.772 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.772 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:16.772 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.772 [2024-10-01 13:32:08.320716] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
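skip_rpc_with_json, starting here, saves the live target's configuration with save_config and then relaunches spdk_tgt from that JSON file. A rough sketch of the same round trip, assuming a target is already listening on the default RPC socket; /tmp/config.json is a stand-in for the suite's test/rpc/config.json:

  SPDK=/home/vagrant/spdk_repo/spdk
  rpc="$SPDK/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp        # nvmf_get_transports --trtype tcp errors until this runs
  $rpc save_config > /tmp/config.json
  # stop the running target first (the suite killprocess-es pid 56979 here), then
  # relaunch it from the saved configuration without an RPC server:
  $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json

The "transport 'tcp' does not exist" request/error pair recorded just below is exactly what the first RPC returns before the transport is created.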
00:04:16.772 [2024-10-01 13:32:08.320849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56979 ] 00:04:16.772 [2024-10-01 13:32:08.465657] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.772 [2024-10-01 13:32:08.524106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.772 [2024-10-01 13:32:08.563468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.031 [2024-10-01 13:32:08.687263] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:17.031 request: 00:04:17.031 { 00:04:17.031 "trtype": "tcp", 00:04:17.031 "method": "nvmf_get_transports", 00:04:17.031 "req_id": 1 00:04:17.031 } 00:04:17.031 Got JSON-RPC error response 00:04:17.031 response: 00:04:17.031 { 00:04:17.031 "code": -19, 00:04:17.031 "message": "No such device" 00:04:17.031 } 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.031 [2024-10-01 13:32:08.699367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.031 13:32:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:17.031 { 00:04:17.031 "subsystems": [ 00:04:17.031 { 00:04:17.031 "subsystem": "fsdev", 00:04:17.031 "config": [ 00:04:17.031 { 00:04:17.031 "method": "fsdev_set_opts", 00:04:17.031 "params": { 00:04:17.031 "fsdev_io_pool_size": 65535, 00:04:17.031 "fsdev_io_cache_size": 256 00:04:17.031 } 00:04:17.031 } 00:04:17.031 ] 00:04:17.031 }, 00:04:17.031 { 00:04:17.031 "subsystem": "keyring", 00:04:17.031 "config": [] 00:04:17.031 }, 00:04:17.031 { 00:04:17.031 "subsystem": "iobuf", 00:04:17.031 "config": [ 00:04:17.031 { 00:04:17.031 "method": "iobuf_set_options", 00:04:17.031 "params": { 00:04:17.031 "small_pool_count": 8192, 00:04:17.031 "large_pool_count": 1024, 00:04:17.031 "small_bufsize": 8192, 00:04:17.031 "large_bufsize": 135168 00:04:17.031 } 00:04:17.031 } 00:04:17.031 ] 00:04:17.031 
}, 00:04:17.031 { 00:04:17.031 "subsystem": "sock", 00:04:17.031 "config": [ 00:04:17.031 { 00:04:17.031 "method": "sock_set_default_impl", 00:04:17.031 "params": { 00:04:17.031 "impl_name": "uring" 00:04:17.031 } 00:04:17.031 }, 00:04:17.031 { 00:04:17.031 "method": "sock_impl_set_options", 00:04:17.031 "params": { 00:04:17.031 "impl_name": "ssl", 00:04:17.031 "recv_buf_size": 4096, 00:04:17.031 "send_buf_size": 4096, 00:04:17.031 "enable_recv_pipe": true, 00:04:17.031 "enable_quickack": false, 00:04:17.031 "enable_placement_id": 0, 00:04:17.031 "enable_zerocopy_send_server": true, 00:04:17.031 "enable_zerocopy_send_client": false, 00:04:17.031 "zerocopy_threshold": 0, 00:04:17.031 "tls_version": 0, 00:04:17.031 "enable_ktls": false 00:04:17.031 } 00:04:17.031 }, 00:04:17.031 { 00:04:17.031 "method": "sock_impl_set_options", 00:04:17.031 "params": { 00:04:17.031 "impl_name": "posix", 00:04:17.031 "recv_buf_size": 2097152, 00:04:17.031 "send_buf_size": 2097152, 00:04:17.031 "enable_recv_pipe": true, 00:04:17.031 "enable_quickack": false, 00:04:17.031 "enable_placement_id": 0, 00:04:17.031 "enable_zerocopy_send_server": true, 00:04:17.031 "enable_zerocopy_send_client": false, 00:04:17.031 "zerocopy_threshold": 0, 00:04:17.031 "tls_version": 0, 00:04:17.031 "enable_ktls": false 00:04:17.031 } 00:04:17.031 }, 00:04:17.031 { 00:04:17.031 "method": "sock_impl_set_options", 00:04:17.031 "params": { 00:04:17.031 "impl_name": "uring", 00:04:17.031 "recv_buf_size": 2097152, 00:04:17.032 "send_buf_size": 2097152, 00:04:17.032 "enable_recv_pipe": true, 00:04:17.032 "enable_quickack": false, 00:04:17.032 "enable_placement_id": 0, 00:04:17.032 "enable_zerocopy_send_server": false, 00:04:17.032 "enable_zerocopy_send_client": false, 00:04:17.032 "zerocopy_threshold": 0, 00:04:17.032 "tls_version": 0, 00:04:17.032 "enable_ktls": false 00:04:17.032 } 00:04:17.032 } 00:04:17.032 ] 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "subsystem": "vmd", 00:04:17.032 "config": [] 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "subsystem": "accel", 00:04:17.032 "config": [ 00:04:17.032 { 00:04:17.032 "method": "accel_set_options", 00:04:17.032 "params": { 00:04:17.032 "small_cache_size": 128, 00:04:17.032 "large_cache_size": 16, 00:04:17.032 "task_count": 2048, 00:04:17.032 "sequence_count": 2048, 00:04:17.032 "buf_count": 2048 00:04:17.032 } 00:04:17.032 } 00:04:17.032 ] 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "subsystem": "bdev", 00:04:17.032 "config": [ 00:04:17.032 { 00:04:17.032 "method": "bdev_set_options", 00:04:17.032 "params": { 00:04:17.032 "bdev_io_pool_size": 65535, 00:04:17.032 "bdev_io_cache_size": 256, 00:04:17.032 "bdev_auto_examine": true, 00:04:17.032 "iobuf_small_cache_size": 128, 00:04:17.032 "iobuf_large_cache_size": 16 00:04:17.032 } 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "method": "bdev_raid_set_options", 00:04:17.032 "params": { 00:04:17.032 "process_window_size_kb": 1024, 00:04:17.032 "process_max_bandwidth_mb_sec": 0 00:04:17.032 } 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "method": "bdev_iscsi_set_options", 00:04:17.032 "params": { 00:04:17.032 "timeout_sec": 30 00:04:17.032 } 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "method": "bdev_nvme_set_options", 00:04:17.032 "params": { 00:04:17.032 "action_on_timeout": "none", 00:04:17.032 "timeout_us": 0, 00:04:17.032 "timeout_admin_us": 0, 00:04:17.032 "keep_alive_timeout_ms": 10000, 00:04:17.032 "arbitration_burst": 0, 00:04:17.032 "low_priority_weight": 0, 00:04:17.032 "medium_priority_weight": 0, 00:04:17.032 "high_priority_weight": 0, 
00:04:17.032 "nvme_adminq_poll_period_us": 10000, 00:04:17.032 "nvme_ioq_poll_period_us": 0, 00:04:17.032 "io_queue_requests": 0, 00:04:17.032 "delay_cmd_submit": true, 00:04:17.032 "transport_retry_count": 4, 00:04:17.032 "bdev_retry_count": 3, 00:04:17.032 "transport_ack_timeout": 0, 00:04:17.032 "ctrlr_loss_timeout_sec": 0, 00:04:17.032 "reconnect_delay_sec": 0, 00:04:17.032 "fast_io_fail_timeout_sec": 0, 00:04:17.032 "disable_auto_failback": false, 00:04:17.032 "generate_uuids": false, 00:04:17.032 "transport_tos": 0, 00:04:17.032 "nvme_error_stat": false, 00:04:17.032 "rdma_srq_size": 0, 00:04:17.032 "io_path_stat": false, 00:04:17.032 "allow_accel_sequence": false, 00:04:17.032 "rdma_max_cq_size": 0, 00:04:17.032 "rdma_cm_event_timeout_ms": 0, 00:04:17.032 "dhchap_digests": [ 00:04:17.032 "sha256", 00:04:17.032 "sha384", 00:04:17.032 "sha512" 00:04:17.032 ], 00:04:17.032 "dhchap_dhgroups": [ 00:04:17.032 "null", 00:04:17.032 "ffdhe2048", 00:04:17.032 "ffdhe3072", 00:04:17.032 "ffdhe4096", 00:04:17.032 "ffdhe6144", 00:04:17.032 "ffdhe8192" 00:04:17.032 ] 00:04:17.032 } 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "method": "bdev_nvme_set_hotplug", 00:04:17.032 "params": { 00:04:17.032 "period_us": 100000, 00:04:17.032 "enable": false 00:04:17.032 } 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "method": "bdev_wait_for_examine" 00:04:17.032 } 00:04:17.032 ] 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "subsystem": "scsi", 00:04:17.032 "config": null 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "subsystem": "scheduler", 00:04:17.032 "config": [ 00:04:17.032 { 00:04:17.032 "method": "framework_set_scheduler", 00:04:17.032 "params": { 00:04:17.032 "name": "static" 00:04:17.032 } 00:04:17.032 } 00:04:17.032 ] 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "subsystem": "vhost_scsi", 00:04:17.032 "config": [] 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "subsystem": "vhost_blk", 00:04:17.032 "config": [] 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "subsystem": "ublk", 00:04:17.032 "config": [] 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "subsystem": "nbd", 00:04:17.032 "config": [] 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "subsystem": "nvmf", 00:04:17.032 "config": [ 00:04:17.032 { 00:04:17.032 "method": "nvmf_set_config", 00:04:17.032 "params": { 00:04:17.032 "discovery_filter": "match_any", 00:04:17.032 "admin_cmd_passthru": { 00:04:17.032 "identify_ctrlr": false 00:04:17.032 }, 00:04:17.032 "dhchap_digests": [ 00:04:17.032 "sha256", 00:04:17.032 "sha384", 00:04:17.032 "sha512" 00:04:17.032 ], 00:04:17.032 "dhchap_dhgroups": [ 00:04:17.032 "null", 00:04:17.032 "ffdhe2048", 00:04:17.032 "ffdhe3072", 00:04:17.032 "ffdhe4096", 00:04:17.032 "ffdhe6144", 00:04:17.032 "ffdhe8192" 00:04:17.032 ] 00:04:17.032 } 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "method": "nvmf_set_max_subsystems", 00:04:17.032 "params": { 00:04:17.032 "max_subsystems": 1024 00:04:17.032 } 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "method": "nvmf_set_crdt", 00:04:17.032 "params": { 00:04:17.032 "crdt1": 0, 00:04:17.032 "crdt2": 0, 00:04:17.032 "crdt3": 0 00:04:17.032 } 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "method": "nvmf_create_transport", 00:04:17.032 "params": { 00:04:17.032 "trtype": "TCP", 00:04:17.032 "max_queue_depth": 128, 00:04:17.032 "max_io_qpairs_per_ctrlr": 127, 00:04:17.032 "in_capsule_data_size": 4096, 00:04:17.032 "max_io_size": 131072, 00:04:17.032 "io_unit_size": 131072, 00:04:17.032 "max_aq_depth": 128, 00:04:17.032 "num_shared_buffers": 511, 00:04:17.032 "buf_cache_size": 4294967295, 00:04:17.032 
"dif_insert_or_strip": false, 00:04:17.032 "zcopy": false, 00:04:17.032 "c2h_success": true, 00:04:17.032 "sock_priority": 0, 00:04:17.032 "abort_timeout_sec": 1, 00:04:17.032 "ack_timeout": 0, 00:04:17.032 "data_wr_pool_size": 0 00:04:17.032 } 00:04:17.032 } 00:04:17.032 ] 00:04:17.032 }, 00:04:17.032 { 00:04:17.032 "subsystem": "iscsi", 00:04:17.032 "config": [ 00:04:17.032 { 00:04:17.032 "method": "iscsi_set_options", 00:04:17.032 "params": { 00:04:17.032 "node_base": "iqn.2016-06.io.spdk", 00:04:17.032 "max_sessions": 128, 00:04:17.032 "max_connections_per_session": 2, 00:04:17.032 "max_queue_depth": 64, 00:04:17.032 "default_time2wait": 2, 00:04:17.032 "default_time2retain": 20, 00:04:17.032 "first_burst_length": 8192, 00:04:17.032 "immediate_data": true, 00:04:17.032 "allow_duplicated_isid": false, 00:04:17.032 "error_recovery_level": 0, 00:04:17.032 "nop_timeout": 60, 00:04:17.032 "nop_in_interval": 30, 00:04:17.032 "disable_chap": false, 00:04:17.032 "require_chap": false, 00:04:17.032 "mutual_chap": false, 00:04:17.032 "chap_group": 0, 00:04:17.032 "max_large_datain_per_connection": 64, 00:04:17.032 "max_r2t_per_connection": 4, 00:04:17.032 "pdu_pool_size": 36864, 00:04:17.032 "immediate_data_pool_size": 16384, 00:04:17.032 "data_out_pool_size": 2048 00:04:17.032 } 00:04:17.032 } 00:04:17.032 ] 00:04:17.032 } 00:04:17.032 ] 00:04:17.032 } 00:04:17.032 13:32:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:17.032 13:32:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56979 00:04:17.032 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 56979 ']' 00:04:17.032 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 56979 00:04:17.032 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:17.032 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:17.032 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56979 00:04:17.291 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:17.291 killing process with pid 56979 00:04:17.291 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:17.291 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56979' 00:04:17.291 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 56979 00:04:17.291 13:32:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 56979 00:04:17.550 13:32:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56999 00:04:17.550 13:32:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:17.550 13:32:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:22.813 13:32:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56999 00:04:22.813 13:32:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 56999 ']' 00:04:22.813 13:32:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 56999 00:04:22.813 13:32:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:22.813 13:32:14 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:22.813 13:32:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56999 00:04:22.813 13:32:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:22.813 13:32:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:22.813 killing process with pid 56999 00:04:22.813 13:32:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56999' 00:04:22.813 13:32:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 56999 00:04:22.813 13:32:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 56999 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:22.814 00:04:22.814 real 0m6.249s 00:04:22.814 user 0m6.049s 00:04:22.814 sys 0m0.433s 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.814 ************************************ 00:04:22.814 END TEST skip_rpc_with_json 00:04:22.814 ************************************ 00:04:22.814 13:32:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:22.814 13:32:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.814 13:32:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.814 13:32:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.814 ************************************ 00:04:22.814 START TEST skip_rpc_with_delay 00:04:22.814 ************************************ 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.814 [2024-10-01 13:32:14.602734] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:22.814 [2024-10-01 13:32:14.602861] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:22.814 00:04:22.814 real 0m0.084s 00:04:22.814 user 0m0.056s 00:04:22.814 sys 0m0.028s 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.814 13:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:22.814 ************************************ 00:04:22.814 END TEST skip_rpc_with_delay 00:04:22.814 ************************************ 00:04:22.814 13:32:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:22.814 13:32:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:22.814 13:32:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:22.814 13:32:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.814 13:32:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.814 13:32:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.073 ************************************ 00:04:23.073 START TEST exit_on_failed_rpc_init 00:04:23.073 ************************************ 00:04:23.073 13:32:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:23.073 13:32:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57109 00:04:23.073 13:32:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:23.073 13:32:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57109 00:04:23.073 13:32:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57109 ']' 00:04:23.073 13:32:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.073 13:32:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:23.073 13:32:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.073 13:32:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:23.073 13:32:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:23.073 [2024-10-01 13:32:14.734185] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
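
(For reference: the waitforlisten step above can be reproduced by hand roughly as below. The spdk_tgt path and the /var/tmp/spdk.sock socket come from this log; the polling loop itself is an assumption, not the exact helper from autotest_common.sh.)

# Start the first target on core 0 and wait until its RPC socket answers.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
tgt_pid=$!
# Assumed wait loop: retry a trivial RPC until the UNIX domain socket is up.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
done
echo "spdk_tgt ($tgt_pid) is listening on /var/tmp/spdk.sock"
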
00:04:23.073 [2024-10-01 13:32:14.734278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57109 ] 00:04:23.073 [2024-10-01 13:32:14.866231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.073 [2024-10-01 13:32:14.925646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.333 [2024-10-01 13:32:14.965665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:23.333 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.333 [2024-10-01 13:32:15.178813] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:04:23.333 [2024-10-01 13:32:15.178938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57119 ] 00:04:23.592 [2024-10-01 13:32:15.326137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.592 [2024-10-01 13:32:15.411938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.592 [2024-10-01 13:32:15.412031] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
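
(The error above is the expected outcome: two targets contending for the same default RPC socket. A hand-run equivalent, with paths taken from the log and a sleep standing in for waitforlisten, would be:)

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &    # first instance claims /var/tmp/spdk.sock
sleep 1                                                     # crude stand-in for waitforlisten
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2      # second instance aborts: socket already in use
echo "second target exit code: $?"                          # non-zero, which is what the test asserts
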
00:04:23.592 [2024-10-01 13:32:15.412049] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:23.592 [2024-10-01 13:32:15.412061] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57109 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57109 ']' 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57109 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57109 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:23.851 killing process with pid 57109 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57109' 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57109 00:04:23.851 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57109 00:04:24.110 00:04:24.110 real 0m1.148s 00:04:24.110 user 0m1.403s 00:04:24.110 sys 0m0.295s 00:04:24.110 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.110 13:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.110 ************************************ 00:04:24.110 END TEST exit_on_failed_rpc_init 00:04:24.110 ************************************ 00:04:24.110 13:32:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:24.110 00:04:24.110 real 0m13.183s 00:04:24.110 user 0m12.724s 00:04:24.110 sys 0m1.147s 00:04:24.110 13:32:15 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.110 13:32:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.110 ************************************ 00:04:24.110 END TEST skip_rpc 00:04:24.110 ************************************ 00:04:24.110 13:32:15 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:24.110 13:32:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.110 13:32:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.110 13:32:15 -- common/autotest_common.sh@10 -- # set +x 00:04:24.110 
************************************ 00:04:24.110 START TEST rpc_client 00:04:24.110 ************************************ 00:04:24.110 13:32:15 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:24.369 * Looking for test storage... 00:04:24.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:24.369 13:32:15 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:24.369 13:32:15 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:24.369 13:32:15 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:24.369 13:32:16 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.369 13:32:16 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:24.369 13:32:16 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.369 13:32:16 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:24.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.369 --rc genhtml_branch_coverage=1 00:04:24.369 --rc genhtml_function_coverage=1 00:04:24.369 --rc genhtml_legend=1 00:04:24.369 --rc geninfo_all_blocks=1 00:04:24.369 --rc geninfo_unexecuted_blocks=1 00:04:24.369 00:04:24.369 ' 00:04:24.369 13:32:16 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:24.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.369 --rc genhtml_branch_coverage=1 00:04:24.369 --rc genhtml_function_coverage=1 00:04:24.369 --rc genhtml_legend=1 00:04:24.369 --rc geninfo_all_blocks=1 00:04:24.369 --rc geninfo_unexecuted_blocks=1 00:04:24.369 00:04:24.369 ' 00:04:24.369 13:32:16 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:24.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.369 --rc genhtml_branch_coverage=1 00:04:24.369 --rc genhtml_function_coverage=1 00:04:24.369 --rc genhtml_legend=1 00:04:24.369 --rc geninfo_all_blocks=1 00:04:24.369 --rc geninfo_unexecuted_blocks=1 00:04:24.369 00:04:24.369 ' 00:04:24.369 13:32:16 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:24.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.369 --rc genhtml_branch_coverage=1 00:04:24.369 --rc genhtml_function_coverage=1 00:04:24.369 --rc genhtml_legend=1 00:04:24.369 --rc geninfo_all_blocks=1 00:04:24.369 --rc geninfo_unexecuted_blocks=1 00:04:24.369 00:04:24.369 ' 00:04:24.369 13:32:16 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:24.369 OK 00:04:24.369 13:32:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:24.369 00:04:24.369 real 0m0.186s 00:04:24.369 user 0m0.120s 00:04:24.369 sys 0m0.078s 00:04:24.369 13:32:16 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.369 13:32:16 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:24.369 ************************************ 00:04:24.369 END TEST rpc_client 00:04:24.369 ************************************ 00:04:24.369 13:32:16 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:24.369 13:32:16 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.369 13:32:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.369 13:32:16 -- common/autotest_common.sh@10 -- # set +x 00:04:24.369 ************************************ 00:04:24.369 START TEST json_config 00:04:24.369 ************************************ 00:04:24.369 13:32:16 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:24.369 13:32:16 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:24.369 13:32:16 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:24.370 13:32:16 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:24.629 13:32:16 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:24.629 13:32:16 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.629 13:32:16 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.629 13:32:16 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.629 13:32:16 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.629 13:32:16 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.629 13:32:16 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.629 13:32:16 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.629 13:32:16 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.629 13:32:16 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.629 13:32:16 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.629 13:32:16 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.629 13:32:16 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:24.629 13:32:16 json_config -- scripts/common.sh@345 -- # : 1 00:04:24.629 13:32:16 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.629 13:32:16 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.629 13:32:16 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:24.629 13:32:16 json_config -- scripts/common.sh@353 -- # local d=1 00:04:24.629 13:32:16 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.629 13:32:16 json_config -- scripts/common.sh@355 -- # echo 1 00:04:24.629 13:32:16 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.629 13:32:16 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:24.629 13:32:16 json_config -- scripts/common.sh@353 -- # local d=2 00:04:24.629 13:32:16 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.629 13:32:16 json_config -- scripts/common.sh@355 -- # echo 2 00:04:24.629 13:32:16 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.629 13:32:16 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.629 13:32:16 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.629 13:32:16 json_config -- scripts/common.sh@368 -- # return 0 00:04:24.629 13:32:16 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.629 13:32:16 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:24.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.629 --rc genhtml_branch_coverage=1 00:04:24.629 --rc genhtml_function_coverage=1 00:04:24.629 --rc genhtml_legend=1 00:04:24.629 --rc geninfo_all_blocks=1 00:04:24.629 --rc geninfo_unexecuted_blocks=1 00:04:24.629 00:04:24.629 ' 00:04:24.629 13:32:16 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:24.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.629 --rc genhtml_branch_coverage=1 00:04:24.629 --rc genhtml_function_coverage=1 00:04:24.629 --rc genhtml_legend=1 00:04:24.629 --rc geninfo_all_blocks=1 00:04:24.629 --rc geninfo_unexecuted_blocks=1 00:04:24.629 00:04:24.629 ' 00:04:24.629 13:32:16 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:24.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.629 --rc genhtml_branch_coverage=1 00:04:24.629 --rc genhtml_function_coverage=1 00:04:24.629 --rc genhtml_legend=1 00:04:24.629 --rc geninfo_all_blocks=1 00:04:24.629 --rc geninfo_unexecuted_blocks=1 00:04:24.629 00:04:24.629 ' 00:04:24.629 13:32:16 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:24.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.629 --rc genhtml_branch_coverage=1 00:04:24.629 --rc genhtml_function_coverage=1 00:04:24.630 --rc genhtml_legend=1 00:04:24.630 --rc geninfo_all_blocks=1 00:04:24.630 --rc geninfo_unexecuted_blocks=1 00:04:24.630 00:04:24.630 ' 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.630 13:32:16 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:24.630 13:32:16 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:24.630 13:32:16 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.630 13:32:16 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.630 13:32:16 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.630 13:32:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.630 13:32:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.630 13:32:16 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.630 13:32:16 json_config -- paths/export.sh@5 -- # export PATH 00:04:24.630 13:32:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@51 -- # : 0 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:24.630 13:32:16 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:24.630 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:24.630 13:32:16 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.630 INFO: JSON configuration test init 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:24.630 13:32:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.630 13:32:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:24.630 13:32:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.630 13:32:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.630 13:32:16 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:24.630 13:32:16 json_config -- json_config/common.sh@9 -- # local app=target 00:04:24.630 13:32:16 json_config -- json_config/common.sh@10 -- # shift 
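
(A minimal sketch of the target start that follows: the app is launched paused with --wait-for-rpc, configured over /var/tmp/spdk_tgt.sock, and then initialized. The test itself drives configuration through load_config; the explicit framework_start_init below is shown only to make the pattern visible and is an assumption about the equivalent manual flow.)

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# ...wait for the socket as in the earlier sketch, send configuration RPCs, then:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init
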
00:04:24.630 13:32:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.630 13:32:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.630 13:32:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.630 13:32:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.630 13:32:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.630 13:32:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57253 00:04:24.630 Waiting for target to run... 00:04:24.630 13:32:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.630 13:32:16 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:24.630 13:32:16 json_config -- json_config/common.sh@25 -- # waitforlisten 57253 /var/tmp/spdk_tgt.sock 00:04:24.630 13:32:16 json_config -- common/autotest_common.sh@831 -- # '[' -z 57253 ']' 00:04:24.630 13:32:16 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.630 13:32:16 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:24.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.630 13:32:16 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.630 13:32:16 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:24.630 13:32:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.630 [2024-10-01 13:32:16.413654] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:04:24.630 [2024-10-01 13:32:16.413758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57253 ] 00:04:24.889 [2024-10-01 13:32:16.719297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.148 [2024-10-01 13:32:16.774074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.716 13:32:17 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:25.716 00:04:25.716 13:32:17 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:25.716 13:32:17 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.716 13:32:17 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:25.716 13:32:17 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:25.716 13:32:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.716 13:32:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.716 13:32:17 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:25.716 13:32:17 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:25.716 13:32:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:25.716 13:32:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.716 13:32:17 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:25.716 13:32:17 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:25.716 13:32:17 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:25.976 [2024-10-01 13:32:17.792252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:26.251 13:32:17 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:26.251 13:32:17 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:26.251 13:32:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.251 13:32:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.251 13:32:17 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:26.251 13:32:17 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:26.251 13:32:17 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:26.251 13:32:17 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:26.251 13:32:17 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:26.251 13:32:17 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:26.251 13:32:17 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:26.251 13:32:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@54 -- # sort 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:26.521 13:32:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.521 13:32:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:26.521 13:32:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.521 13:32:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.521 13:32:18 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:26.521 13:32:18 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.521 13:32:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.780 MallocForNvmf0 00:04:26.780 13:32:18 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.780 13:32:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:27.038 MallocForNvmf1 00:04:27.038 13:32:18 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:27.038 13:32:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:27.301 [2024-10-01 13:32:19.094744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.301 13:32:19 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.301 13:32:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.561 13:32:19 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.561 13:32:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.820 13:32:19 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.820 13:32:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:28.080 13:32:19 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:28.080 13:32:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:28.339 [2024-10-01 13:32:20.167365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:28.339 13:32:20 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:28.339 13:32:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.339 13:32:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.598 13:32:20 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:28.598 13:32:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.598 13:32:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.598 13:32:20 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:04:28.598 13:32:20 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.598 13:32:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.856 MallocBdevForConfigChangeCheck 00:04:28.856 13:32:20 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:28.856 13:32:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.856 13:32:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.856 13:32:20 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:28.856 13:32:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.423 INFO: shutting down applications... 00:04:29.423 13:32:20 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:29.423 13:32:20 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:29.423 13:32:20 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:29.423 13:32:20 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:29.423 13:32:20 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:29.682 Calling clear_iscsi_subsystem 00:04:29.682 Calling clear_nvmf_subsystem 00:04:29.682 Calling clear_nbd_subsystem 00:04:29.682 Calling clear_ublk_subsystem 00:04:29.682 Calling clear_vhost_blk_subsystem 00:04:29.682 Calling clear_vhost_scsi_subsystem 00:04:29.682 Calling clear_bdev_subsystem 00:04:29.682 13:32:21 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:29.682 13:32:21 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:29.682 13:32:21 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:29.682 13:32:21 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.682 13:32:21 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:29.682 13:32:21 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:29.940 13:32:21 json_config -- json_config/json_config.sh@352 -- # break 00:04:29.940 13:32:21 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:29.940 13:32:21 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:29.940 13:32:21 json_config -- json_config/common.sh@31 -- # local app=target 00:04:29.940 13:32:21 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.940 13:32:21 json_config -- json_config/common.sh@35 -- # [[ -n 57253 ]] 00:04:29.940 13:32:21 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57253 00:04:29.940 13:32:21 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.940 13:32:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.940 13:32:21 json_config -- json_config/common.sh@41 -- # kill -0 57253 00:04:29.940 13:32:21 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:04:30.508 13:32:22 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.508 13:32:22 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.508 13:32:22 json_config -- json_config/common.sh@41 -- # kill -0 57253 00:04:30.508 SPDK target shutdown done 00:04:30.508 INFO: relaunching applications... 00:04:30.508 13:32:22 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.508 13:32:22 json_config -- json_config/common.sh@43 -- # break 00:04:30.508 13:32:22 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.508 13:32:22 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.508 13:32:22 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:30.508 13:32:22 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.508 13:32:22 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.508 13:32:22 json_config -- json_config/common.sh@10 -- # shift 00:04:30.508 13:32:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.508 13:32:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.508 13:32:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.508 13:32:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.508 13:32:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.508 13:32:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57454 00:04:30.508 13:32:22 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.508 13:32:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.508 Waiting for target to run... 00:04:30.508 13:32:22 json_config -- json_config/common.sh@25 -- # waitforlisten 57454 /var/tmp/spdk_tgt.sock 00:04:30.508 13:32:22 json_config -- common/autotest_common.sh@831 -- # '[' -z 57454 ']' 00:04:30.508 13:32:22 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.508 13:32:22 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.508 13:32:22 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.508 13:32:22 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.508 13:32:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.508 [2024-10-01 13:32:22.315214] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
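
(The relaunch above replays a configuration file that was produced with save_config. A hedged sketch of that round trip, using the paths and flags visible in the log:)

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &
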
00:04:30.508 [2024-10-01 13:32:22.315604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57454 ] 00:04:30.766 [2024-10-01 13:32:22.605197] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.024 [2024-10-01 13:32:22.651281] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.024 [2024-10-01 13:32:22.783521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:31.281 [2024-10-01 13:32:22.985770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.282 [2024-10-01 13:32:23.017870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.599 00:04:31.599 INFO: Checking if target configuration is the same... 00:04:31.599 13:32:23 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.599 13:32:23 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:31.599 13:32:23 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.599 13:32:23 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:31.599 13:32:23 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:31.599 13:32:23 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:31.599 13:32:23 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.599 13:32:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.599 + '[' 2 -ne 2 ']' 00:04:31.599 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:31.599 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:31.599 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:31.599 +++ basename /dev/fd/62 00:04:31.599 ++ mktemp /tmp/62.XXX 00:04:31.599 + tmp_file_1=/tmp/62.fdc 00:04:31.599 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.599 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.599 + tmp_file_2=/tmp/spdk_tgt_config.json.BRr 00:04:31.599 + ret=0 00:04:31.599 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.167 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.167 + diff -u /tmp/62.fdc /tmp/spdk_tgt_config.json.BRr 00:04:32.167 INFO: JSON config files are the same 00:04:32.167 + echo 'INFO: JSON config files are the same' 00:04:32.167 + rm /tmp/62.fdc /tmp/spdk_tgt_config.json.BRr 00:04:32.167 + exit 0 00:04:32.167 INFO: changing configuration and checking if this can be detected... 00:04:32.167 13:32:23 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:32.167 13:32:23 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
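
(The comparison above amounts to the following, assuming config_filter.py reads stdin and writes stdout as json_diff.sh uses it; the /tmp file names are placeholders for the mktemp files shown in the log:)

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > /tmp/live.json
/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'
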
00:04:32.167 13:32:23 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:32.167 13:32:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:32.425 13:32:24 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:32.425 13:32:24 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.425 13:32:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.425 + '[' 2 -ne 2 ']' 00:04:32.425 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:32.425 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:32.425 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:32.425 +++ basename /dev/fd/62 00:04:32.425 ++ mktemp /tmp/62.XXX 00:04:32.425 + tmp_file_1=/tmp/62.6vj 00:04:32.425 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.425 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:32.425 + tmp_file_2=/tmp/spdk_tgt_config.json.m2S 00:04:32.425 + ret=0 00:04:32.425 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.682 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.940 + diff -u /tmp/62.6vj /tmp/spdk_tgt_config.json.m2S 00:04:32.940 + ret=1 00:04:32.940 + echo '=== Start of file: /tmp/62.6vj ===' 00:04:32.940 + cat /tmp/62.6vj 00:04:32.940 + echo '=== End of file: /tmp/62.6vj ===' 00:04:32.940 + echo '' 00:04:32.940 + echo '=== Start of file: /tmp/spdk_tgt_config.json.m2S ===' 00:04:32.940 + cat /tmp/spdk_tgt_config.json.m2S 00:04:32.940 + echo '=== End of file: /tmp/spdk_tgt_config.json.m2S ===' 00:04:32.940 + echo '' 00:04:32.940 + rm /tmp/62.6vj /tmp/spdk_tgt_config.json.m2S 00:04:32.940 + exit 1 00:04:32.940 INFO: configuration change detected. 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
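
(The "change" that gets detected is the removal of the marker bdev created earlier. Both RPCs below appear verbatim in this log; only the sequencing comment is added:)

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
# After the delete, save_config output no longer matches spdk_tgt_config.json, so the sorted diff returns 1.
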
00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@324 -- # [[ -n 57454 ]] 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.940 13:32:24 json_config -- json_config/json_config.sh@330 -- # killprocess 57454 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@950 -- # '[' -z 57454 ']' 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@954 -- # kill -0 57454 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@955 -- # uname 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57454 00:04:32.940 killing process with pid 57454 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57454' 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@969 -- # kill 57454 00:04:32.940 13:32:24 json_config -- common/autotest_common.sh@974 -- # wait 57454 00:04:33.198 13:32:24 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.198 13:32:24 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:33.198 13:32:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:33.198 13:32:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.198 INFO: Success 00:04:33.198 13:32:24 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:33.198 13:32:24 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:33.198 ************************************ 00:04:33.198 END TEST json_config 00:04:33.198 
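A simplified re-creation (an assumption, not the actual autotest_common.sh source) of the killprocess pattern visible above — confirm the pid is alive and belongs to an SPDK reactor, signal it, then wait for it to exit:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                   # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")      # expected to be reactor_0
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true              # wait only succeeds for child processes
    }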
************************************ 00:04:33.198 00:04:33.198 real 0m8.747s 00:04:33.198 user 0m12.951s 00:04:33.198 sys 0m1.403s 00:04:33.198 13:32:24 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.198 13:32:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.198 13:32:24 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:33.198 13:32:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.198 13:32:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.198 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:33.198 ************************************ 00:04:33.198 START TEST json_config_extra_key 00:04:33.198 ************************************ 00:04:33.198 13:32:24 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:33.198 13:32:24 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:33.198 13:32:24 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:33.198 13:32:24 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:33.458 13:32:25 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:33.458 13:32:25 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.458 13:32:25 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:33.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.458 --rc genhtml_branch_coverage=1 00:04:33.458 --rc genhtml_function_coverage=1 00:04:33.458 --rc genhtml_legend=1 00:04:33.458 --rc geninfo_all_blocks=1 00:04:33.458 --rc geninfo_unexecuted_blocks=1 00:04:33.458 00:04:33.458 ' 00:04:33.458 13:32:25 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:33.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.458 --rc genhtml_branch_coverage=1 00:04:33.458 --rc genhtml_function_coverage=1 00:04:33.458 --rc genhtml_legend=1 00:04:33.458 --rc geninfo_all_blocks=1 00:04:33.458 --rc geninfo_unexecuted_blocks=1 00:04:33.458 00:04:33.458 ' 00:04:33.458 13:32:25 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:33.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.458 --rc genhtml_branch_coverage=1 00:04:33.458 --rc genhtml_function_coverage=1 00:04:33.458 --rc genhtml_legend=1 00:04:33.458 --rc geninfo_all_blocks=1 00:04:33.458 --rc geninfo_unexecuted_blocks=1 00:04:33.458 00:04:33.458 ' 00:04:33.458 13:32:25 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:33.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.458 --rc genhtml_branch_coverage=1 00:04:33.458 --rc genhtml_function_coverage=1 00:04:33.458 --rc genhtml_legend=1 00:04:33.458 --rc geninfo_all_blocks=1 00:04:33.458 --rc geninfo_unexecuted_blocks=1 00:04:33.458 00:04:33.458 ' 00:04:33.458 13:32:25 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.458 13:32:25 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.458 13:32:25 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.458 13:32:25 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.458 13:32:25 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.458 13:32:25 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.458 13:32:25 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:33.458 13:32:25 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:33.458 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:33.458 13:32:25 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:33.458 13:32:25 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:33.458 13:32:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:33.458 13:32:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:33.458 13:32:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:33.458 13:32:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:33.458 13:32:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:33.458 13:32:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:33.458 13:32:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:33.458 13:32:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:33.458 13:32:25 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:33.458 13:32:25 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:33.459 INFO: launching applications... 
00:04:33.459 13:32:25 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:33.459 13:32:25 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:33.459 13:32:25 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:33.459 13:32:25 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.459 13:32:25 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.459 13:32:25 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.459 13:32:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.459 13:32:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.459 13:32:25 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57604 00:04:33.459 13:32:25 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.459 Waiting for target to run... 00:04:33.459 13:32:25 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57604 /var/tmp/spdk_tgt.sock 00:04:33.459 13:32:25 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:33.459 13:32:25 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57604 ']' 00:04:33.459 13:32:25 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.459 13:32:25 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:33.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.459 13:32:25 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.459 13:32:25 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:33.459 13:32:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:33.459 [2024-10-01 13:32:25.190101] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:04:33.459 [2024-10-01 13:32:25.190214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57604 ] 00:04:33.717 [2024-10-01 13:32:25.493395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.717 [2024-10-01 13:32:25.539519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.717 [2024-10-01 13:32:25.566017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:34.651 00:04:34.651 INFO: shutting down applications... 00:04:34.651 13:32:26 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.651 13:32:26 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:34.651 13:32:26 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:34.651 13:32:26 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
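A hedged sketch of the launch-and-wait step shown above (paths and flags are taken from the log; the polling loop is only a stand-in for the real waitforlisten helper):

    app=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    sock=/var/tmp/spdk_tgt.sock
    "$app" -m 0x1 -s 1024 -r "$sock" \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    pid=$!
    # poll the RPC socket until the target answers, giving up after roughly 15 seconds
    for _ in $(seq 1 30); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done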
00:04:34.651 13:32:26 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:34.651 13:32:26 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:34.651 13:32:26 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:34.651 13:32:26 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57604 ]] 00:04:34.651 13:32:26 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57604 00:04:34.651 13:32:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:34.651 13:32:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.651 13:32:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57604 00:04:34.651 13:32:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.909 13:32:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.909 13:32:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.909 13:32:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57604 00:04:34.909 13:32:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.909 13:32:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:34.909 SPDK target shutdown done 00:04:34.909 Success 00:04:34.909 13:32:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.909 13:32:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.909 13:32:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:34.909 00:04:34.909 real 0m1.792s 00:04:34.909 user 0m1.720s 00:04:34.909 sys 0m0.325s 00:04:34.909 ************************************ 00:04:34.909 END TEST json_config_extra_key 00:04:34.909 ************************************ 00:04:34.909 13:32:26 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.909 13:32:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:35.167 13:32:26 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:35.167 13:32:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.167 13:32:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.167 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:04:35.167 ************************************ 00:04:35.167 START TEST alias_rpc 00:04:35.167 ************************************ 00:04:35.167 13:32:26 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:35.167 * Looking for test storage... 
00:04:35.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:35.167 13:32:26 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:35.167 13:32:26 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:35.167 13:32:26 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:35.167 13:32:26 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:35.167 13:32:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:35.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.168 13:32:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:35.168 13:32:26 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.168 13:32:26 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:35.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.168 --rc genhtml_branch_coverage=1 00:04:35.168 --rc genhtml_function_coverage=1 00:04:35.168 --rc genhtml_legend=1 00:04:35.168 --rc geninfo_all_blocks=1 00:04:35.168 --rc geninfo_unexecuted_blocks=1 00:04:35.168 00:04:35.168 ' 00:04:35.168 13:32:26 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:35.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.168 --rc genhtml_branch_coverage=1 00:04:35.168 --rc genhtml_function_coverage=1 00:04:35.168 --rc genhtml_legend=1 00:04:35.168 --rc geninfo_all_blocks=1 00:04:35.168 --rc geninfo_unexecuted_blocks=1 00:04:35.168 00:04:35.168 ' 00:04:35.168 13:32:26 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:35.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.168 --rc genhtml_branch_coverage=1 00:04:35.168 --rc genhtml_function_coverage=1 00:04:35.168 --rc genhtml_legend=1 00:04:35.168 --rc geninfo_all_blocks=1 00:04:35.168 --rc geninfo_unexecuted_blocks=1 00:04:35.168 00:04:35.168 ' 00:04:35.168 13:32:26 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:35.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.168 --rc genhtml_branch_coverage=1 00:04:35.168 --rc genhtml_function_coverage=1 00:04:35.168 --rc genhtml_legend=1 00:04:35.168 --rc geninfo_all_blocks=1 00:04:35.168 --rc geninfo_unexecuted_blocks=1 00:04:35.168 00:04:35.168 ' 00:04:35.168 13:32:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:35.168 13:32:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57682 00:04:35.168 13:32:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57682 00:04:35.168 13:32:26 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57682 ']' 00:04:35.168 13:32:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.168 13:32:26 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.168 13:32:26 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.168 13:32:26 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.168 13:32:26 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.168 13:32:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.168 [2024-10-01 13:32:27.011821] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:04:35.168 [2024-10-01 13:32:27.012140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57682 ] 00:04:35.426 [2024-10-01 13:32:27.144385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.426 [2024-10-01 13:32:27.204557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.426 [2024-10-01 13:32:27.246053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:35.686 13:32:27 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.686 13:32:27 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:35.686 13:32:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:35.944 13:32:27 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57682 00:04:35.944 13:32:27 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57682 ']' 00:04:35.944 13:32:27 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57682 00:04:35.944 13:32:27 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:35.944 13:32:27 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:35.944 13:32:27 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57682 00:04:35.944 13:32:27 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:35.944 killing process with pid 57682 00:04:35.944 13:32:27 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:35.944 13:32:27 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57682' 00:04:35.944 13:32:27 alias_rpc -- common/autotest_common.sh@969 -- # kill 57682 00:04:35.944 13:32:27 alias_rpc -- common/autotest_common.sh@974 -- # wait 57682 00:04:36.202 ************************************ 00:04:36.202 END TEST alias_rpc 00:04:36.202 ************************************ 00:04:36.202 00:04:36.202 real 0m1.227s 00:04:36.202 user 0m1.431s 00:04:36.202 sys 0m0.324s 00:04:36.202 13:32:28 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.202 13:32:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.202 13:32:28 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:36.202 13:32:28 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:36.202 13:32:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.202 13:32:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.202 13:32:28 -- common/autotest_common.sh@10 -- # set +x 00:04:36.460 ************************************ 00:04:36.460 START TEST spdkcli_tcp 00:04:36.460 ************************************ 00:04:36.460 13:32:28 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:36.460 * Looking for test storage... 
00:04:36.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:36.460 13:32:28 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:36.460 13:32:28 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:36.460 13:32:28 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:36.460 13:32:28 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.461 13:32:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:36.461 13:32:28 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.461 13:32:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:36.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.461 --rc genhtml_branch_coverage=1 00:04:36.461 --rc genhtml_function_coverage=1 00:04:36.461 --rc genhtml_legend=1 00:04:36.461 --rc geninfo_all_blocks=1 00:04:36.461 --rc geninfo_unexecuted_blocks=1 00:04:36.461 00:04:36.461 ' 00:04:36.461 13:32:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:36.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.461 --rc genhtml_branch_coverage=1 00:04:36.461 --rc genhtml_function_coverage=1 00:04:36.461 --rc genhtml_legend=1 00:04:36.461 --rc geninfo_all_blocks=1 00:04:36.461 --rc geninfo_unexecuted_blocks=1 00:04:36.461 
00:04:36.461 ' 00:04:36.461 13:32:28 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:36.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.461 --rc genhtml_branch_coverage=1 00:04:36.461 --rc genhtml_function_coverage=1 00:04:36.461 --rc genhtml_legend=1 00:04:36.461 --rc geninfo_all_blocks=1 00:04:36.461 --rc geninfo_unexecuted_blocks=1 00:04:36.461 00:04:36.461 ' 00:04:36.461 13:32:28 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:36.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.461 --rc genhtml_branch_coverage=1 00:04:36.461 --rc genhtml_function_coverage=1 00:04:36.461 --rc genhtml_legend=1 00:04:36.461 --rc geninfo_all_blocks=1 00:04:36.461 --rc geninfo_unexecuted_blocks=1 00:04:36.461 00:04:36.461 ' 00:04:36.461 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:36.461 13:32:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:36.461 13:32:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:36.461 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:36.461 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:36.461 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:36.461 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:36.461 13:32:28 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.461 13:32:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.461 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57753 00:04:36.461 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57753 00:04:36.461 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:36.461 13:32:28 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57753 ']' 00:04:36.461 13:32:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.461 13:32:28 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.461 13:32:28 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.461 13:32:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.461 13:32:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.461 [2024-10-01 13:32:28.311888] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:04:36.461 [2024-10-01 13:32:28.312197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57753 ] 00:04:36.719 [2024-10-01 13:32:28.442411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.719 [2024-10-01 13:32:28.503839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.719 [2024-10-01 13:32:28.503853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.719 [2024-10-01 13:32:28.545015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:36.977 13:32:28 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.977 13:32:28 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:36.977 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57763 00:04:36.977 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:36.977 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:37.236 [ 00:04:37.236 "bdev_malloc_delete", 00:04:37.236 "bdev_malloc_create", 00:04:37.236 "bdev_null_resize", 00:04:37.236 "bdev_null_delete", 00:04:37.236 "bdev_null_create", 00:04:37.236 "bdev_nvme_cuse_unregister", 00:04:37.236 "bdev_nvme_cuse_register", 00:04:37.236 "bdev_opal_new_user", 00:04:37.236 "bdev_opal_set_lock_state", 00:04:37.236 "bdev_opal_delete", 00:04:37.236 "bdev_opal_get_info", 00:04:37.236 "bdev_opal_create", 00:04:37.236 "bdev_nvme_opal_revert", 00:04:37.236 "bdev_nvme_opal_init", 00:04:37.236 "bdev_nvme_send_cmd", 00:04:37.236 "bdev_nvme_set_keys", 00:04:37.236 "bdev_nvme_get_path_iostat", 00:04:37.236 "bdev_nvme_get_mdns_discovery_info", 00:04:37.236 "bdev_nvme_stop_mdns_discovery", 00:04:37.236 "bdev_nvme_start_mdns_discovery", 00:04:37.236 "bdev_nvme_set_multipath_policy", 00:04:37.236 "bdev_nvme_set_preferred_path", 00:04:37.236 "bdev_nvme_get_io_paths", 00:04:37.236 "bdev_nvme_remove_error_injection", 00:04:37.236 "bdev_nvme_add_error_injection", 00:04:37.236 "bdev_nvme_get_discovery_info", 00:04:37.236 "bdev_nvme_stop_discovery", 00:04:37.236 "bdev_nvme_start_discovery", 00:04:37.236 "bdev_nvme_get_controller_health_info", 00:04:37.236 "bdev_nvme_disable_controller", 00:04:37.236 "bdev_nvme_enable_controller", 00:04:37.236 "bdev_nvme_reset_controller", 00:04:37.236 "bdev_nvme_get_transport_statistics", 00:04:37.236 "bdev_nvme_apply_firmware", 00:04:37.236 "bdev_nvme_detach_controller", 00:04:37.236 "bdev_nvme_get_controllers", 00:04:37.236 "bdev_nvme_attach_controller", 00:04:37.236 "bdev_nvme_set_hotplug", 00:04:37.236 "bdev_nvme_set_options", 00:04:37.236 "bdev_passthru_delete", 00:04:37.236 "bdev_passthru_create", 00:04:37.236 "bdev_lvol_set_parent_bdev", 00:04:37.236 "bdev_lvol_set_parent", 00:04:37.236 "bdev_lvol_check_shallow_copy", 00:04:37.236 "bdev_lvol_start_shallow_copy", 00:04:37.236 "bdev_lvol_grow_lvstore", 00:04:37.236 "bdev_lvol_get_lvols", 00:04:37.236 "bdev_lvol_get_lvstores", 00:04:37.236 "bdev_lvol_delete", 00:04:37.236 "bdev_lvol_set_read_only", 00:04:37.236 "bdev_lvol_resize", 00:04:37.236 "bdev_lvol_decouple_parent", 00:04:37.236 "bdev_lvol_inflate", 00:04:37.236 "bdev_lvol_rename", 00:04:37.236 "bdev_lvol_clone_bdev", 00:04:37.236 "bdev_lvol_clone", 00:04:37.236 "bdev_lvol_snapshot", 
00:04:37.236 "bdev_lvol_create", 00:04:37.236 "bdev_lvol_delete_lvstore", 00:04:37.236 "bdev_lvol_rename_lvstore", 00:04:37.237 "bdev_lvol_create_lvstore", 00:04:37.237 "bdev_raid_set_options", 00:04:37.237 "bdev_raid_remove_base_bdev", 00:04:37.237 "bdev_raid_add_base_bdev", 00:04:37.237 "bdev_raid_delete", 00:04:37.237 "bdev_raid_create", 00:04:37.237 "bdev_raid_get_bdevs", 00:04:37.237 "bdev_error_inject_error", 00:04:37.237 "bdev_error_delete", 00:04:37.237 "bdev_error_create", 00:04:37.237 "bdev_split_delete", 00:04:37.237 "bdev_split_create", 00:04:37.237 "bdev_delay_delete", 00:04:37.237 "bdev_delay_create", 00:04:37.237 "bdev_delay_update_latency", 00:04:37.237 "bdev_zone_block_delete", 00:04:37.237 "bdev_zone_block_create", 00:04:37.237 "blobfs_create", 00:04:37.237 "blobfs_detect", 00:04:37.237 "blobfs_set_cache_size", 00:04:37.237 "bdev_aio_delete", 00:04:37.237 "bdev_aio_rescan", 00:04:37.237 "bdev_aio_create", 00:04:37.237 "bdev_ftl_set_property", 00:04:37.237 "bdev_ftl_get_properties", 00:04:37.237 "bdev_ftl_get_stats", 00:04:37.237 "bdev_ftl_unmap", 00:04:37.237 "bdev_ftl_unload", 00:04:37.237 "bdev_ftl_delete", 00:04:37.237 "bdev_ftl_load", 00:04:37.237 "bdev_ftl_create", 00:04:37.237 "bdev_virtio_attach_controller", 00:04:37.237 "bdev_virtio_scsi_get_devices", 00:04:37.237 "bdev_virtio_detach_controller", 00:04:37.237 "bdev_virtio_blk_set_hotplug", 00:04:37.237 "bdev_iscsi_delete", 00:04:37.237 "bdev_iscsi_create", 00:04:37.237 "bdev_iscsi_set_options", 00:04:37.237 "bdev_uring_delete", 00:04:37.237 "bdev_uring_rescan", 00:04:37.237 "bdev_uring_create", 00:04:37.237 "accel_error_inject_error", 00:04:37.237 "ioat_scan_accel_module", 00:04:37.237 "dsa_scan_accel_module", 00:04:37.237 "iaa_scan_accel_module", 00:04:37.237 "keyring_file_remove_key", 00:04:37.237 "keyring_file_add_key", 00:04:37.237 "keyring_linux_set_options", 00:04:37.237 "fsdev_aio_delete", 00:04:37.237 "fsdev_aio_create", 00:04:37.237 "iscsi_get_histogram", 00:04:37.237 "iscsi_enable_histogram", 00:04:37.237 "iscsi_set_options", 00:04:37.237 "iscsi_get_auth_groups", 00:04:37.237 "iscsi_auth_group_remove_secret", 00:04:37.237 "iscsi_auth_group_add_secret", 00:04:37.237 "iscsi_delete_auth_group", 00:04:37.237 "iscsi_create_auth_group", 00:04:37.237 "iscsi_set_discovery_auth", 00:04:37.237 "iscsi_get_options", 00:04:37.237 "iscsi_target_node_request_logout", 00:04:37.237 "iscsi_target_node_set_redirect", 00:04:37.237 "iscsi_target_node_set_auth", 00:04:37.237 "iscsi_target_node_add_lun", 00:04:37.237 "iscsi_get_stats", 00:04:37.237 "iscsi_get_connections", 00:04:37.237 "iscsi_portal_group_set_auth", 00:04:37.237 "iscsi_start_portal_group", 00:04:37.237 "iscsi_delete_portal_group", 00:04:37.237 "iscsi_create_portal_group", 00:04:37.237 "iscsi_get_portal_groups", 00:04:37.237 "iscsi_delete_target_node", 00:04:37.237 "iscsi_target_node_remove_pg_ig_maps", 00:04:37.237 "iscsi_target_node_add_pg_ig_maps", 00:04:37.237 "iscsi_create_target_node", 00:04:37.237 "iscsi_get_target_nodes", 00:04:37.237 "iscsi_delete_initiator_group", 00:04:37.237 "iscsi_initiator_group_remove_initiators", 00:04:37.237 "iscsi_initiator_group_add_initiators", 00:04:37.237 "iscsi_create_initiator_group", 00:04:37.237 "iscsi_get_initiator_groups", 00:04:37.237 "nvmf_set_crdt", 00:04:37.237 "nvmf_set_config", 00:04:37.237 "nvmf_set_max_subsystems", 00:04:37.237 "nvmf_stop_mdns_prr", 00:04:37.237 "nvmf_publish_mdns_prr", 00:04:37.237 "nvmf_subsystem_get_listeners", 00:04:37.237 "nvmf_subsystem_get_qpairs", 00:04:37.237 
"nvmf_subsystem_get_controllers", 00:04:37.237 "nvmf_get_stats", 00:04:37.237 "nvmf_get_transports", 00:04:37.237 "nvmf_create_transport", 00:04:37.237 "nvmf_get_targets", 00:04:37.237 "nvmf_delete_target", 00:04:37.237 "nvmf_create_target", 00:04:37.237 "nvmf_subsystem_allow_any_host", 00:04:37.237 "nvmf_subsystem_set_keys", 00:04:37.237 "nvmf_subsystem_remove_host", 00:04:37.237 "nvmf_subsystem_add_host", 00:04:37.237 "nvmf_ns_remove_host", 00:04:37.237 "nvmf_ns_add_host", 00:04:37.237 "nvmf_subsystem_remove_ns", 00:04:37.237 "nvmf_subsystem_set_ns_ana_group", 00:04:37.237 "nvmf_subsystem_add_ns", 00:04:37.237 "nvmf_subsystem_listener_set_ana_state", 00:04:37.237 "nvmf_discovery_get_referrals", 00:04:37.237 "nvmf_discovery_remove_referral", 00:04:37.237 "nvmf_discovery_add_referral", 00:04:37.237 "nvmf_subsystem_remove_listener", 00:04:37.237 "nvmf_subsystem_add_listener", 00:04:37.237 "nvmf_delete_subsystem", 00:04:37.237 "nvmf_create_subsystem", 00:04:37.237 "nvmf_get_subsystems", 00:04:37.237 "env_dpdk_get_mem_stats", 00:04:37.237 "nbd_get_disks", 00:04:37.237 "nbd_stop_disk", 00:04:37.237 "nbd_start_disk", 00:04:37.237 "ublk_recover_disk", 00:04:37.237 "ublk_get_disks", 00:04:37.237 "ublk_stop_disk", 00:04:37.237 "ublk_start_disk", 00:04:37.237 "ublk_destroy_target", 00:04:37.237 "ublk_create_target", 00:04:37.237 "virtio_blk_create_transport", 00:04:37.237 "virtio_blk_get_transports", 00:04:37.237 "vhost_controller_set_coalescing", 00:04:37.237 "vhost_get_controllers", 00:04:37.237 "vhost_delete_controller", 00:04:37.237 "vhost_create_blk_controller", 00:04:37.237 "vhost_scsi_controller_remove_target", 00:04:37.237 "vhost_scsi_controller_add_target", 00:04:37.237 "vhost_start_scsi_controller", 00:04:37.237 "vhost_create_scsi_controller", 00:04:37.237 "thread_set_cpumask", 00:04:37.237 "scheduler_set_options", 00:04:37.237 "framework_get_governor", 00:04:37.237 "framework_get_scheduler", 00:04:37.237 "framework_set_scheduler", 00:04:37.237 "framework_get_reactors", 00:04:37.237 "thread_get_io_channels", 00:04:37.237 "thread_get_pollers", 00:04:37.237 "thread_get_stats", 00:04:37.237 "framework_monitor_context_switch", 00:04:37.237 "spdk_kill_instance", 00:04:37.237 "log_enable_timestamps", 00:04:37.237 "log_get_flags", 00:04:37.237 "log_clear_flag", 00:04:37.237 "log_set_flag", 00:04:37.237 "log_get_level", 00:04:37.237 "log_set_level", 00:04:37.237 "log_get_print_level", 00:04:37.237 "log_set_print_level", 00:04:37.237 "framework_enable_cpumask_locks", 00:04:37.237 "framework_disable_cpumask_locks", 00:04:37.237 "framework_wait_init", 00:04:37.237 "framework_start_init", 00:04:37.237 "scsi_get_devices", 00:04:37.237 "bdev_get_histogram", 00:04:37.237 "bdev_enable_histogram", 00:04:37.237 "bdev_set_qos_limit", 00:04:37.237 "bdev_set_qd_sampling_period", 00:04:37.237 "bdev_get_bdevs", 00:04:37.237 "bdev_reset_iostat", 00:04:37.237 "bdev_get_iostat", 00:04:37.237 "bdev_examine", 00:04:37.237 "bdev_wait_for_examine", 00:04:37.237 "bdev_set_options", 00:04:37.237 "accel_get_stats", 00:04:37.237 "accel_set_options", 00:04:37.237 "accel_set_driver", 00:04:37.237 "accel_crypto_key_destroy", 00:04:37.237 "accel_crypto_keys_get", 00:04:37.237 "accel_crypto_key_create", 00:04:37.237 "accel_assign_opc", 00:04:37.237 "accel_get_module_info", 00:04:37.237 "accel_get_opc_assignments", 00:04:37.237 "vmd_rescan", 00:04:37.237 "vmd_remove_device", 00:04:37.237 "vmd_enable", 00:04:37.237 "sock_get_default_impl", 00:04:37.237 "sock_set_default_impl", 00:04:37.237 "sock_impl_set_options", 00:04:37.237 
"sock_impl_get_options", 00:04:37.237 "iobuf_get_stats", 00:04:37.237 "iobuf_set_options", 00:04:37.237 "keyring_get_keys", 00:04:37.237 "framework_get_pci_devices", 00:04:37.237 "framework_get_config", 00:04:37.237 "framework_get_subsystems", 00:04:37.237 "fsdev_set_opts", 00:04:37.237 "fsdev_get_opts", 00:04:37.237 "trace_get_info", 00:04:37.237 "trace_get_tpoint_group_mask", 00:04:37.237 "trace_disable_tpoint_group", 00:04:37.237 "trace_enable_tpoint_group", 00:04:37.237 "trace_clear_tpoint_mask", 00:04:37.237 "trace_set_tpoint_mask", 00:04:37.237 "notify_get_notifications", 00:04:37.237 "notify_get_types", 00:04:37.237 "spdk_get_version", 00:04:37.237 "rpc_get_methods" 00:04:37.237 ] 00:04:37.237 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:37.237 13:32:28 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:37.237 13:32:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.237 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:37.237 13:32:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57753 00:04:37.237 13:32:28 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57753 ']' 00:04:37.237 13:32:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57753 00:04:37.237 13:32:28 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:37.237 13:32:28 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.237 13:32:28 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57753 00:04:37.237 killing process with pid 57753 00:04:37.237 13:32:29 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.237 13:32:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.237 13:32:29 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57753' 00:04:37.237 13:32:29 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57753 00:04:37.237 13:32:29 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57753 00:04:37.496 ************************************ 00:04:37.496 END TEST spdkcli_tcp 00:04:37.496 ************************************ 00:04:37.496 00:04:37.496 real 0m1.219s 00:04:37.496 user 0m2.085s 00:04:37.496 sys 0m0.349s 00:04:37.496 13:32:29 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.496 13:32:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.496 13:32:29 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:37.496 13:32:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.496 13:32:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.496 13:32:29 -- common/autotest_common.sh@10 -- # set +x 00:04:37.496 ************************************ 00:04:37.496 START TEST dpdk_mem_utility 00:04:37.496 ************************************ 00:04:37.496 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:37.755 * Looking for test storage... 
00:04:37.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.755 13:32:29 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:37.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.755 --rc genhtml_branch_coverage=1 00:04:37.755 --rc genhtml_function_coverage=1 00:04:37.755 --rc genhtml_legend=1 00:04:37.755 --rc geninfo_all_blocks=1 00:04:37.755 --rc geninfo_unexecuted_blocks=1 00:04:37.755 00:04:37.755 ' 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:37.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.755 --rc genhtml_branch_coverage=1 00:04:37.755 --rc genhtml_function_coverage=1 00:04:37.755 --rc genhtml_legend=1 00:04:37.755 --rc geninfo_all_blocks=1 00:04:37.755 --rc geninfo_unexecuted_blocks=1 00:04:37.755 00:04:37.755 ' 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:37.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.755 --rc genhtml_branch_coverage=1 00:04:37.755 --rc genhtml_function_coverage=1 00:04:37.755 --rc genhtml_legend=1 00:04:37.755 --rc geninfo_all_blocks=1 00:04:37.755 --rc geninfo_unexecuted_blocks=1 00:04:37.755 00:04:37.755 ' 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:37.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.755 --rc genhtml_branch_coverage=1 00:04:37.755 --rc genhtml_function_coverage=1 00:04:37.755 --rc genhtml_legend=1 00:04:37.755 --rc geninfo_all_blocks=1 00:04:37.755 --rc geninfo_unexecuted_blocks=1 00:04:37.755 00:04:37.755 ' 00:04:37.755 13:32:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:37.755 13:32:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57844 00:04:37.755 13:32:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57844 00:04:37.755 13:32:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57844 ']' 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:37.755 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.755 [2024-10-01 13:32:29.581309] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:04:37.755 [2024-10-01 13:32:29.581644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57844 ] 00:04:38.014 [2024-10-01 13:32:29.713473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.014 [2024-10-01 13:32:29.773618] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.014 [2024-10-01 13:32:29.814482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:38.278 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.278 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:38.278 13:32:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:38.278 13:32:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:38.278 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.278 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.278 { 00:04:38.278 "filename": "/tmp/spdk_mem_dump.txt" 00:04:38.278 } 00:04:38.278 13:32:29 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.278 13:32:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:38.278 DPDK memory size 860.000000 MiB in 1 heap(s) 00:04:38.278 1 heaps totaling size 860.000000 MiB 00:04:38.278 size: 860.000000 MiB heap id: 0 00:04:38.278 end heaps---------- 00:04:38.278 9 mempools totaling size 642.649841 MiB 00:04:38.278 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:38.278 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:38.278 size: 92.545471 MiB name: bdev_io_57844 00:04:38.278 size: 51.011292 MiB name: evtpool_57844 00:04:38.278 size: 50.003479 MiB name: msgpool_57844 00:04:38.278 size: 36.509338 MiB name: fsdev_io_57844 00:04:38.278 size: 21.763794 MiB name: PDU_Pool 00:04:38.278 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:38.278 size: 0.026123 MiB name: Session_Pool 00:04:38.278 end mempools------- 00:04:38.278 6 memzones totaling size 4.142822 MiB 00:04:38.278 size: 1.000366 MiB name: RG_ring_0_57844 00:04:38.278 size: 1.000366 MiB name: RG_ring_1_57844 00:04:38.278 size: 1.000366 MiB name: RG_ring_4_57844 00:04:38.278 size: 1.000366 MiB name: RG_ring_5_57844 00:04:38.278 size: 0.125366 MiB name: RG_ring_2_57844 00:04:38.278 size: 0.015991 MiB name: RG_ring_3_57844 00:04:38.278 end memzones------- 00:04:38.278 13:32:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:38.278 heap id: 0 total size: 860.000000 MiB number of busy elements: 309 number of free elements: 16 00:04:38.278 list of free elements. 
size: 13.936157 MiB 00:04:38.278 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:38.278 element at address: 0x200000800000 with size: 1.996948 MiB 00:04:38.278 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:04:38.278 element at address: 0x20001be00000 with size: 0.999878 MiB 00:04:38.278 element at address: 0x200034a00000 with size: 0.994446 MiB 00:04:38.278 element at address: 0x200009600000 with size: 0.959839 MiB 00:04:38.278 element at address: 0x200015e00000 with size: 0.954285 MiB 00:04:38.278 element at address: 0x20001c000000 with size: 0.936584 MiB 00:04:38.278 element at address: 0x200000200000 with size: 0.834839 MiB 00:04:38.278 element at address: 0x20001d800000 with size: 0.567505 MiB 00:04:38.278 element at address: 0x20000d800000 with size: 0.489258 MiB 00:04:38.278 element at address: 0x200003e00000 with size: 0.487915 MiB 00:04:38.278 element at address: 0x20001c200000 with size: 0.485657 MiB 00:04:38.278 element at address: 0x200007000000 with size: 0.480469 MiB 00:04:38.278 element at address: 0x20002ac00000 with size: 0.396118 MiB 00:04:38.278 element at address: 0x200003a00000 with size: 0.353027 MiB 00:04:38.278 list of standard malloc elements. size: 199.267151 MiB 00:04:38.278 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:04:38.278 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:04:38.278 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:04:38.278 element at address: 0x20001befff80 with size: 1.000122 MiB 00:04:38.278 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:04:38.278 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:38.278 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:04:38.278 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:38.278 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:04:38.278 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:04:38.278 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:38.278 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003aff880 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:04:38.279 element at 
address: 0x200003e7d600 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000707b000 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000707b180 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000707b240 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000707b300 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000707b480 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000707b540 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000707b600 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:04:38.279 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000d87d640 
with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:04:38.279 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891480 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891540 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891600 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891780 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891840 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891900 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892080 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892140 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892200 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892380 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892440 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892500 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892680 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892740 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892800 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892980 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d893040 with size: 0.000183 MiB 
00:04:38.279 element at address: 0x20001d893100 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d893280 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d893340 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d893400 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d893580 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d893640 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d893700 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d893880 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d893940 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:04:38.279 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894000 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894180 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894240 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894300 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894480 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894540 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894600 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894780 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894840 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894900 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d895080 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d895140 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d895200 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d895380 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20001d895440 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac65680 with size: 0.000183 MiB 00:04:38.280 element at 
address: 0x20002ac65740 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6c340 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6e880 
with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:04:38.280 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:04:38.280 list of memzone associated elements. 
size: 646.796692 MiB 00:04:38.280 element at address: 0x20001d895500 with size: 211.416748 MiB 00:04:38.280 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:38.280 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:04:38.280 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:38.280 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:04:38.280 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57844_0 00:04:38.280 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:38.280 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57844_0 00:04:38.280 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:38.280 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57844_0 00:04:38.280 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:04:38.280 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57844_0 00:04:38.280 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:04:38.280 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:38.280 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:04:38.280 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:38.280 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:38.280 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57844 00:04:38.280 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:38.280 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57844 00:04:38.280 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:38.280 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57844 00:04:38.280 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:04:38.280 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:38.280 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:04:38.280 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:38.280 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:04:38.280 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:38.280 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:04:38.280 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:38.280 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:38.280 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57844 00:04:38.280 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:38.280 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57844 00:04:38.280 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:04:38.280 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57844 00:04:38.280 element at address: 0x200034afe940 with size: 1.000488 MiB 00:04:38.281 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57844 00:04:38.281 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:04:38.281 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57844 00:04:38.281 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:04:38.281 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57844 00:04:38.281 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:04:38.281 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:38.281 element at address: 0x20000707b780 with size: 0.500488 MiB 00:04:38.281 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:04:38.281 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:04:38.281 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:38.281 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:04:38.281 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57844 00:04:38.281 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:04:38.281 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:38.281 element at address: 0x20002ac65800 with size: 0.023743 MiB 00:04:38.281 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:38.281 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:04:38.281 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57844 00:04:38.281 element at address: 0x20002ac6b940 with size: 0.002441 MiB 00:04:38.281 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:38.281 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:38.281 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57844 00:04:38.281 element at address: 0x200003aff940 with size: 0.000305 MiB 00:04:38.281 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57844 00:04:38.281 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:04:38.281 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57844 00:04:38.281 element at address: 0x20002ac6c400 with size: 0.000305 MiB 00:04:38.281 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:38.281 13:32:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:38.281 13:32:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57844 00:04:38.281 13:32:30 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57844 ']' 00:04:38.281 13:32:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57844 00:04:38.281 13:32:30 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:38.281 13:32:30 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.281 13:32:30 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57844 00:04:38.281 killing process with pid 57844 00:04:38.281 13:32:30 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:38.281 13:32:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:38.281 13:32:30 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57844' 00:04:38.281 13:32:30 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57844 00:04:38.281 13:32:30 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57844 00:04:38.847 ************************************ 00:04:38.847 END TEST dpdk_mem_utility 00:04:38.847 ************************************ 00:04:38.847 00:04:38.847 real 0m1.066s 00:04:38.847 user 0m1.121s 00:04:38.847 sys 0m0.308s 00:04:38.847 13:32:30 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.847 13:32:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.847 13:32:30 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:38.847 13:32:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.847 13:32:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.847 13:32:30 -- common/autotest_common.sh@10 -- # set +x 
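For reference, the dpdk_mem_utility pass that just finished is driven by test_dpdk_mem_info.sh: it launches spdk_tgt, asks it to dump its DPDK allocator state via the env_dpdk_get_mem_stats RPC (which reports the /tmp/spdk_mem_dump.txt file seen earlier), then renders that dump with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element view of heap 0. Below is a rough bash sketch of the same flow using the paths visible in this log; the sleep and plain kill stand in for the waitforlisten/killprocess helpers, and it assumes scripts/rpc.py exposes the same env_dpdk_get_mem_stats method that rpc_cmd invokes here.

SPDK=/home/vagrant/spdk_repo/spdk

$SPDK/build/bin/spdk_tgt &                      # needs hugepages configured, as on the CI VM
tgt_pid=$!
sleep 2                                         # crude stand-in for waitforlisten

$SPDK/scripts/rpc.py env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
$SPDK/scripts/dpdk_mem_info.py                  # heaps / mempools / memzones summary
$SPDK/scripts/dpdk_mem_info.py -m 0             # element-level detail for heap id 0

kill "$tgt_pid"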
00:04:38.847 ************************************ 00:04:38.847 START TEST event 00:04:38.847 ************************************ 00:04:38.847 13:32:30 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:38.847 * Looking for test storage... 00:04:38.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:38.847 13:32:30 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:38.847 13:32:30 event -- common/autotest_common.sh@1681 -- # lcov --version 00:04:38.847 13:32:30 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:38.847 13:32:30 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:38.847 13:32:30 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.847 13:32:30 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.847 13:32:30 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.847 13:32:30 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.847 13:32:30 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.847 13:32:30 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.847 13:32:30 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.847 13:32:30 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.847 13:32:30 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.847 13:32:30 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.847 13:32:30 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.847 13:32:30 event -- scripts/common.sh@344 -- # case "$op" in 00:04:38.847 13:32:30 event -- scripts/common.sh@345 -- # : 1 00:04:38.847 13:32:30 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.847 13:32:30 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.847 13:32:30 event -- scripts/common.sh@365 -- # decimal 1 00:04:38.847 13:32:30 event -- scripts/common.sh@353 -- # local d=1 00:04:38.847 13:32:30 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.847 13:32:30 event -- scripts/common.sh@355 -- # echo 1 00:04:38.847 13:32:30 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.847 13:32:30 event -- scripts/common.sh@366 -- # decimal 2 00:04:38.847 13:32:30 event -- scripts/common.sh@353 -- # local d=2 00:04:38.847 13:32:30 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.847 13:32:30 event -- scripts/common.sh@355 -- # echo 2 00:04:38.847 13:32:30 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.847 13:32:30 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.847 13:32:30 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.847 13:32:30 event -- scripts/common.sh@368 -- # return 0 00:04:38.847 13:32:30 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.847 13:32:30 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:38.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.847 --rc genhtml_branch_coverage=1 00:04:38.847 --rc genhtml_function_coverage=1 00:04:38.847 --rc genhtml_legend=1 00:04:38.847 --rc geninfo_all_blocks=1 00:04:38.847 --rc geninfo_unexecuted_blocks=1 00:04:38.847 00:04:38.847 ' 00:04:38.847 13:32:30 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:38.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.847 --rc genhtml_branch_coverage=1 00:04:38.847 --rc genhtml_function_coverage=1 00:04:38.847 --rc genhtml_legend=1 00:04:38.847 --rc 
geninfo_all_blocks=1 00:04:38.847 --rc geninfo_unexecuted_blocks=1 00:04:38.847 00:04:38.847 ' 00:04:38.847 13:32:30 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:38.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.847 --rc genhtml_branch_coverage=1 00:04:38.847 --rc genhtml_function_coverage=1 00:04:38.847 --rc genhtml_legend=1 00:04:38.847 --rc geninfo_all_blocks=1 00:04:38.847 --rc geninfo_unexecuted_blocks=1 00:04:38.847 00:04:38.847 ' 00:04:38.847 13:32:30 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:38.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.847 --rc genhtml_branch_coverage=1 00:04:38.847 --rc genhtml_function_coverage=1 00:04:38.847 --rc genhtml_legend=1 00:04:38.847 --rc geninfo_all_blocks=1 00:04:38.847 --rc geninfo_unexecuted_blocks=1 00:04:38.847 00:04:38.847 ' 00:04:38.847 13:32:30 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:38.847 13:32:30 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:38.847 13:32:30 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.847 13:32:30 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:38.847 13:32:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.847 13:32:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.847 ************************************ 00:04:38.847 START TEST event_perf 00:04:38.847 ************************************ 00:04:38.847 13:32:30 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.847 Running I/O for 1 seconds...[2024-10-01 13:32:30.653739] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:04:38.847 [2024-10-01 13:32:30.653831] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57916 ] 00:04:39.105 [2024-10-01 13:32:30.787112] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:39.105 [2024-10-01 13:32:30.849213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.105 Running I/O for 1 seconds...[2024-10-01 13:32:30.849361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:04:39.105 [2024-10-01 13:32:30.849478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:04:39.105 [2024-10-01 13:32:30.849479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.480 00:04:40.480 lcore 0: 187423 00:04:40.480 lcore 1: 187421 00:04:40.480 lcore 2: 187421 00:04:40.480 lcore 3: 187422 00:04:40.480 done. 
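The lcore lines just above are event_perf's per-core event counters after a one second run on mask 0xF (cores 0-3). A small sketch that reruns the binary from the path shown in this log and totals those counters; the capture file name and the awk totaling are illustrative additions, and the binary needs the same hugepage setup the CI VM provides.

PERF=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf

"$PERF" -m 0xF -t 1 | tee /tmp/event_perf.out   # hypothetical capture file

# Sum the "lcore N: <count>" lines printed at the end of the run.
awk '/^lcore [0-9]+:/ { total += $NF; cores++ }
     END { printf "%d events across %d cores in 1 second\n", total, cores }' /tmp/event_perf.out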
00:04:40.480 00:04:40.480 real 0m1.284s 00:04:40.480 user 0m4.115s 00:04:40.480 sys 0m0.048s 00:04:40.480 13:32:31 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.480 13:32:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:40.480 ************************************ 00:04:40.480 END TEST event_perf 00:04:40.480 ************************************ 00:04:40.480 13:32:31 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:40.480 13:32:31 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:40.480 13:32:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.480 13:32:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.480 ************************************ 00:04:40.480 START TEST event_reactor 00:04:40.480 ************************************ 00:04:40.480 13:32:31 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:40.480 [2024-10-01 13:32:31.993009] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:04:40.480 [2024-10-01 13:32:31.993102] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57955 ] 00:04:40.480 [2024-10-01 13:32:32.129130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.480 [2024-10-01 13:32:32.189234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.429 test_start 00:04:41.429 oneshot 00:04:41.429 tick 100 00:04:41.429 tick 100 00:04:41.429 tick 250 00:04:41.429 tick 100 00:04:41.429 tick 100 00:04:41.429 tick 100 00:04:41.429 tick 250 00:04:41.429 tick 500 00:04:41.429 tick 100 00:04:41.429 tick 100 00:04:41.429 tick 250 00:04:41.429 tick 100 00:04:41.429 tick 100 00:04:41.429 test_end 00:04:41.429 00:04:41.429 real 0m1.287s 00:04:41.429 user 0m1.133s 00:04:41.429 sys 0m0.048s 00:04:41.429 13:32:33 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.429 13:32:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:41.429 ************************************ 00:04:41.429 END TEST event_reactor 00:04:41.429 ************************************ 00:04:41.689 13:32:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:41.689 13:32:33 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:41.689 13:32:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.689 13:32:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.689 ************************************ 00:04:41.689 START TEST event_reactor_perf 00:04:41.689 ************************************ 00:04:41.689 13:32:33 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:41.689 [2024-10-01 13:32:33.330810] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:04:41.689 [2024-10-01 13:32:33.330901] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57985 ] 00:04:41.689 [2024-10-01 13:32:33.467474] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.689 [2024-10-01 13:32:33.526992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.114 test_start 00:04:43.114 test_end 00:04:43.114 Performance: 361952 events per second 00:04:43.114 00:04:43.114 real 0m1.286s 00:04:43.114 user 0m1.142s 00:04:43.114 sys 0m0.037s 00:04:43.114 13:32:34 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.114 ************************************ 00:04:43.114 END TEST event_reactor_perf 00:04:43.114 ************************************ 00:04:43.114 13:32:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:43.114 13:32:34 event -- event/event.sh@49 -- # uname -s 00:04:43.114 13:32:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:43.114 13:32:34 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:43.114 13:32:34 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.114 13:32:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.114 13:32:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.114 ************************************ 00:04:43.114 START TEST event_scheduler 00:04:43.114 ************************************ 00:04:43.114 13:32:34 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:43.114 * Looking for test storage... 
00:04:43.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:43.114 13:32:34 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:43.114 13:32:34 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:04:43.114 13:32:34 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:43.114 13:32:34 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.114 13:32:34 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:43.114 13:32:34 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.114 13:32:34 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:43.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.114 --rc genhtml_branch_coverage=1 00:04:43.114 --rc genhtml_function_coverage=1 00:04:43.114 --rc genhtml_legend=1 00:04:43.114 --rc geninfo_all_blocks=1 00:04:43.114 --rc geninfo_unexecuted_blocks=1 00:04:43.114 00:04:43.114 ' 00:04:43.114 13:32:34 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:43.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.114 --rc genhtml_branch_coverage=1 00:04:43.114 --rc genhtml_function_coverage=1 00:04:43.114 --rc genhtml_legend=1 00:04:43.114 --rc geninfo_all_blocks=1 00:04:43.114 --rc geninfo_unexecuted_blocks=1 00:04:43.114 00:04:43.114 ' 00:04:43.114 13:32:34 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:43.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.114 --rc genhtml_branch_coverage=1 00:04:43.114 --rc genhtml_function_coverage=1 00:04:43.114 --rc genhtml_legend=1 00:04:43.114 --rc geninfo_all_blocks=1 00:04:43.114 --rc geninfo_unexecuted_blocks=1 00:04:43.114 00:04:43.114 ' 00:04:43.114 13:32:34 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:43.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.114 --rc genhtml_branch_coverage=1 00:04:43.114 --rc genhtml_function_coverage=1 00:04:43.114 --rc genhtml_legend=1 00:04:43.114 --rc geninfo_all_blocks=1 00:04:43.114 --rc geninfo_unexecuted_blocks=1 00:04:43.114 00:04:43.114 ' 00:04:43.115 13:32:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:43.115 13:32:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58060 00:04:43.115 13:32:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.115 13:32:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:43.115 13:32:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58060 00:04:43.115 13:32:34 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58060 ']' 00:04:43.115 13:32:34 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.115 13:32:34 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.115 13:32:34 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.115 13:32:34 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.115 13:32:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.115 [2024-10-01 13:32:34.902989] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:04:43.115 [2024-10-01 13:32:34.903097] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58060 ] 00:04:43.373 [2024-10-01 13:32:35.045210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.373 [2024-10-01 13:32:35.118033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.373 [2024-10-01 13:32:35.118178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.373 [2024-10-01 13:32:35.118295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.373 [2024-10-01 13:32:35.119042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.305 13:32:35 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.305 13:32:35 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:44.305 13:32:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:44.305 13:32:35 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.305 13:32:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.305 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.305 POWER: Cannot set governor of lcore 0 to userspace 00:04:44.305 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.305 POWER: Cannot set governor of lcore 0 to performance 00:04:44.305 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.305 POWER: Cannot set governor of lcore 0 to userspace 00:04:44.306 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.306 POWER: Cannot set governor of lcore 0 to userspace 00:04:44.306 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:44.306 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:44.306 POWER: Unable to set Power Management Environment for lcore 0 00:04:44.306 [2024-10-01 13:32:35.881367] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:44.306 [2024-10-01 13:32:35.881505] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:44.306 [2024-10-01 13:32:35.881711] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:44.306 [2024-10-01 13:32:35.881745] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:44.306 [2024-10-01 13:32:35.881761] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:44.306 [2024-10-01 13:32:35.881774] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:44.306 13:32:35 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.306 13:32:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:44.306 13:32:35 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.306 13:32:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.306 [2024-10-01 13:32:35.917281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:44.306 [2024-10-01 13:32:35.936427] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:44.306 13:32:35 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.306 13:32:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:44.306 13:32:35 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.306 13:32:35 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.306 13:32:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.306 ************************************ 00:04:44.306 START TEST scheduler_create_thread 00:04:44.306 ************************************ 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.306 2 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.306 3 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.306 4 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.306 5 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.306 6 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.306 7 00:04:44.306 13:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.306 8 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.306 9 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.306 10 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.306 13:32:36 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.306 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.240 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.240 13:32:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:45.240 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.240 13:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.614 13:32:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.614 13:32:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:46.614 13:32:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:46.614 13:32:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.614 13:32:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.549 ************************************ 00:04:47.549 END TEST scheduler_create_thread 00:04:47.549 ************************************ 00:04:47.549 13:32:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.549 00:04:47.549 real 0m3.372s 00:04:47.549 user 0m0.018s 00:04:47.549 sys 0m0.008s 00:04:47.549 13:32:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.549 13:32:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.549 13:32:39 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:47.549 13:32:39 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58060 00:04:47.549 13:32:39 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58060 ']' 00:04:47.549 13:32:39 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58060 00:04:47.549 13:32:39 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:47.549 13:32:39 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.549 13:32:39 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58060 00:04:47.549 killing process with pid 58060 00:04:47.549 13:32:39 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:47.549 13:32:39 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:47.549 13:32:39 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
58060' 00:04:47.549 13:32:39 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58060 00:04:47.549 13:32:39 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58060 00:04:48.117 [2024-10-01 13:32:39.700924] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:48.117 00:04:48.117 real 0m5.264s 00:04:48.117 user 0m10.785s 00:04:48.117 sys 0m0.336s 00:04:48.117 13:32:39 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.117 ************************************ 00:04:48.117 END TEST event_scheduler 00:04:48.117 ************************************ 00:04:48.117 13:32:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.117 13:32:39 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:48.117 13:32:39 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:48.117 13:32:39 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.117 13:32:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.117 13:32:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.117 ************************************ 00:04:48.117 START TEST app_repeat 00:04:48.117 ************************************ 00:04:48.117 13:32:39 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:48.117 13:32:39 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.117 13:32:39 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.117 13:32:39 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:48.117 13:32:39 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.117 13:32:39 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:48.117 13:32:39 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:48.117 13:32:39 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:48.377 13:32:39 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58165 00:04:48.377 13:32:39 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.377 13:32:39 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:48.377 Process app_repeat pid: 58165 00:04:48.377 13:32:39 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58165' 00:04:48.377 13:32:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:48.377 spdk_app_start Round 0 00:04:48.377 13:32:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:48.377 13:32:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58165 /var/tmp/spdk-nbd.sock 00:04:48.377 13:32:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58165 ']' 00:04:48.377 13:32:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:48.377 13:32:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:48.377 13:32:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
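[Editor's note] The trace above launches the app_repeat binary (-r /var/tmp/spdk-nbd.sock -m 0x3 -t 4) and then blocks in waitforlisten until the application answers on its RPC Unix socket. A minimal sketch of that launch-and-wait pattern; the retry loop and the rpc_get_methods probe are illustrative, not the exact autotest_common.sh implementation:

    rpc_sock=/var/tmp/spdk-nbd.sock
    ./test/event/app_repeat/app_repeat -r "$rpc_sock" -m 0x3 -t 4 &
    app_pid=$!
    for _ in $(seq 1 100); do
        # poll the RPC socket until the target responds, then start driving it
        if ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
            break
        fi
        kill -0 "$app_pid" || exit 1    # the app died during startup
        sleep 0.1
    done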
00:04:48.377 13:32:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.377 13:32:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:48.377 [2024-10-01 13:32:40.007218] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:04:48.377 [2024-10-01 13:32:40.007311] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58165 ] 00:04:48.377 [2024-10-01 13:32:40.142408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.377 [2024-10-01 13:32:40.200927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.377 [2024-10-01 13:32:40.200938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.377 [2024-10-01 13:32:40.231005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:48.635 13:32:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.635 13:32:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:48.635 13:32:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.894 Malloc0 00:04:48.894 13:32:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.153 Malloc1 00:04:49.153 13:32:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.153 13:32:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:49.412 /dev/nbd0 00:04:49.412 13:32:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:49.412 13:32:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.412 1+0 records in 00:04:49.412 1+0 records out 00:04:49.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310949 s, 13.2 MB/s 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:49.412 13:32:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:49.412 13:32:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.412 13:32:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.412 13:32:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:49.672 /dev/nbd1 00:04:49.672 13:32:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:49.672 13:32:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:49.672 13:32:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:49.672 13:32:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:49.672 13:32:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:49.672 13:32:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:49.672 13:32:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:49.672 13:32:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:49.672 13:32:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:49.672 13:32:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:49.672 13:32:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.672 1+0 records in 00:04:49.672 1+0 records out 00:04:49.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315641 s, 13.0 MB/s 00:04:49.672 13:32:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.672 13:32:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:49.672 13:32:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.672 13:32:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:49.672 13:32:41 event.app_repeat -- 
common/autotest_common.sh@889 -- # return 0 00:04:49.672 13:32:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.672 13:32:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.672 13:32:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.672 13:32:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.672 13:32:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:49.932 { 00:04:49.932 "nbd_device": "/dev/nbd0", 00:04:49.932 "bdev_name": "Malloc0" 00:04:49.932 }, 00:04:49.932 { 00:04:49.932 "nbd_device": "/dev/nbd1", 00:04:49.932 "bdev_name": "Malloc1" 00:04:49.932 } 00:04:49.932 ]' 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:49.932 { 00:04:49.932 "nbd_device": "/dev/nbd0", 00:04:49.932 "bdev_name": "Malloc0" 00:04:49.932 }, 00:04:49.932 { 00:04:49.932 "nbd_device": "/dev/nbd1", 00:04:49.932 "bdev_name": "Malloc1" 00:04:49.932 } 00:04:49.932 ]' 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.932 /dev/nbd1' 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.932 /dev/nbd1' 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.932 256+0 records in 00:04:49.932 256+0 records out 00:04:49.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105943 s, 99.0 MB/s 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.932 256+0 records in 00:04:49.932 256+0 records out 00:04:49.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203514 s, 51.5 MB/s 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.932 13:32:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:50.192 256+0 records in 00:04:50.192 
256+0 records out 00:04:50.192 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265188 s, 39.5 MB/s 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.192 13:32:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:50.451 13:32:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:50.451 13:32:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:50.451 13:32:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:50.451 13:32:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.451 13:32:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.451 13:32:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:50.452 13:32:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.452 13:32:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.452 13:32:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.452 13:32:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:50.711 13:32:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:50.711 13:32:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:50.711 13:32:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:50.711 13:32:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.711 13:32:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:04:50.711 13:32:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:50.711 13:32:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.711 13:32:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.711 13:32:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.711 13:32:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.711 13:32:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.969 13:32:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:50.969 13:32:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:50.969 13:32:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.969 13:32:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:50.969 13:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:50.969 13:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.970 13:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:50.970 13:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:50.970 13:32:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:50.970 13:32:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:50.970 13:32:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:50.970 13:32:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:50.970 13:32:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:51.229 13:32:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:51.488 [2024-10-01 13:32:43.180545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.488 [2024-10-01 13:32:43.232909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.488 [2024-10-01 13:32:43.232921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.488 [2024-10-01 13:32:43.260475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:51.488 [2024-10-01 13:32:43.260611] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:51.488 [2024-10-01 13:32:43.260626] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.777 13:32:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.777 spdk_app_start Round 1 00:04:54.777 13:32:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:54.777 13:32:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58165 /var/tmp/spdk-nbd.sock 00:04:54.777 13:32:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58165 ']' 00:04:54.777 13:32:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.777 13:32:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:54.777 13:32:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
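[Editor's note] Round 0 above exercises the nbd data path end to end: a 1 MiB random file is written through /dev/nbd0 and /dev/nbd1 with O_DIRECT and then compared back before the devices are stopped. A condensed sketch of that write/verify sequence, with paths taken from the trace and error handling omitted:

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write stage
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$nbd"                              # verify stage
    done
    rm "$tmp_file"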
00:04:54.777 13:32:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.777 13:32:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.777 13:32:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.777 13:32:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:54.777 13:32:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.777 Malloc0 00:04:55.036 13:32:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.296 Malloc1 00:04:55.296 13:32:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.296 13:32:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.556 /dev/nbd0 00:04:55.556 13:32:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.556 13:32:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.556 1+0 records in 00:04:55.556 1+0 records out 
00:04:55.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296508 s, 13.8 MB/s 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:55.556 13:32:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:55.556 13:32:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.556 13:32:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.556 13:32:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:55.815 /dev/nbd1 00:04:55.815 13:32:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.815 13:32:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.815 13:32:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:55.815 13:32:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:55.815 13:32:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:55.816 13:32:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:55.816 13:32:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:55.816 13:32:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:55.816 13:32:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:55.816 13:32:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:55.816 13:32:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.816 1+0 records in 00:04:55.816 1+0 records out 00:04:55.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350948 s, 11.7 MB/s 00:04:55.816 13:32:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.816 13:32:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:55.816 13:32:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.816 13:32:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:55.816 13:32:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:55.816 13:32:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.816 13:32:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.816 13:32:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.816 13:32:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.816 13:32:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:56.075 { 00:04:56.075 "nbd_device": "/dev/nbd0", 00:04:56.075 "bdev_name": "Malloc0" 00:04:56.075 }, 00:04:56.075 { 00:04:56.075 "nbd_device": "/dev/nbd1", 00:04:56.075 "bdev_name": "Malloc1" 00:04:56.075 } 
00:04:56.075 ]' 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.075 { 00:04:56.075 "nbd_device": "/dev/nbd0", 00:04:56.075 "bdev_name": "Malloc0" 00:04:56.075 }, 00:04:56.075 { 00:04:56.075 "nbd_device": "/dev/nbd1", 00:04:56.075 "bdev_name": "Malloc1" 00:04:56.075 } 00:04:56.075 ]' 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.075 /dev/nbd1' 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.075 /dev/nbd1' 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.075 256+0 records in 00:04:56.075 256+0 records out 00:04:56.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00733909 s, 143 MB/s 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.075 13:32:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.334 256+0 records in 00:04:56.334 256+0 records out 00:04:56.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232725 s, 45.1 MB/s 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.334 256+0 records in 00:04:56.334 256+0 records out 00:04:56.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263179 s, 39.8 MB/s 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.334 13:32:47 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.334 13:32:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.594 13:32:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.594 13:32:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.594 13:32:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.594 13:32:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.594 13:32:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.594 13:32:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.594 13:32:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.594 13:32:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.594 13:32:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.594 13:32:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.853 13:32:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.853 13:32:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.853 13:32:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.853 13:32:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.853 13:32:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.853 13:32:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.853 13:32:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.853 13:32:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.853 13:32:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.853 13:32:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.853 13:32:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.112 13:32:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:57.112 13:32:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.112 13:32:48 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:04:57.112 13:32:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:57.112 13:32:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:57.112 13:32:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.112 13:32:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:57.112 13:32:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:57.112 13:32:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:57.112 13:32:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:57.112 13:32:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:57.112 13:32:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:57.112 13:32:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:57.374 13:32:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.633 [2024-10-01 13:32:49.301256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.633 [2024-10-01 13:32:49.353060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.633 [2024-10-01 13:32:49.353074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.633 [2024-10-01 13:32:49.384029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:57.633 [2024-10-01 13:32:49.384143] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.633 [2024-10-01 13:32:49.384157] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.922 13:32:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.922 spdk_app_start Round 2 00:05:00.922 13:32:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:00.922 13:32:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58165 /var/tmp/spdk-nbd.sock 00:05:00.922 13:32:52 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58165 ']' 00:05:00.922 13:32:52 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.922 13:32:52 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.922 13:32:52 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
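[Editor's note] The waitfornbd / waitfornbd_exit helpers traced in each round both poll /proc/partitions, one waiting for the nbd device to appear after nbd_start_disk, the other for it to disappear after nbd_stop_disk (waitfornbd additionally does a small O_DIRECT dd read, visible above). A hedged approximation of the shared polling loop; the helper name and sleep interval are assumed, the 20-iteration limit follows the trace:

    wait_for_nbd_state() {
        local nbd_name=$1 want_present=$2 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                [[ $want_present == yes ]] && return 0
            else
                [[ $want_present == no ]] && return 0
            fi
            sleep 0.1
        done
        return 1
    }
    # wait_for_nbd_state nbd0 yes    # after nbd_start_disk
    # wait_for_nbd_state nbd0 no     # after nbd_stop_disk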
00:05:00.922 13:32:52 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.922 13:32:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.922 13:32:52 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.922 13:32:52 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:00.922 13:32:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.922 Malloc0 00:05:01.181 13:32:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.439 Malloc1 00:05:01.439 13:32:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.439 13:32:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.698 /dev/nbd0 00:05:01.698 13:32:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.698 13:32:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.698 1+0 records in 00:05:01.698 1+0 records out 
00:05:01.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032271 s, 12.7 MB/s 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:01.698 13:32:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:01.698 13:32:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.698 13:32:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.698 13:32:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.957 /dev/nbd1 00:05:01.957 13:32:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.957 13:32:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.957 1+0 records in 00:05:01.957 1+0 records out 00:05:01.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299296 s, 13.7 MB/s 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:01.957 13:32:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:01.957 13:32:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.957 13:32:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.957 13:32:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.957 13:32:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.957 13:32:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.217 13:32:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:02.217 { 00:05:02.217 "nbd_device": "/dev/nbd0", 00:05:02.217 "bdev_name": "Malloc0" 00:05:02.217 }, 00:05:02.217 { 00:05:02.217 "nbd_device": "/dev/nbd1", 00:05:02.217 "bdev_name": "Malloc1" 00:05:02.217 } 
00:05:02.217 ]' 00:05:02.217 13:32:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.217 { 00:05:02.217 "nbd_device": "/dev/nbd0", 00:05:02.217 "bdev_name": "Malloc0" 00:05:02.217 }, 00:05:02.217 { 00:05:02.217 "nbd_device": "/dev/nbd1", 00:05:02.217 "bdev_name": "Malloc1" 00:05:02.217 } 00:05:02.217 ]' 00:05:02.217 13:32:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.217 /dev/nbd1' 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.217 /dev/nbd1' 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.217 256+0 records in 00:05:02.217 256+0 records out 00:05:02.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105989 s, 98.9 MB/s 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:02.217 256+0 records in 00:05:02.217 256+0 records out 00:05:02.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223324 s, 47.0 MB/s 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.217 13:32:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:02.217 256+0 records in 00:05:02.217 256+0 records out 00:05:02.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234906 s, 44.6 MB/s 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:02.476 13:32:54 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.476 13:32:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.735 13:32:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.735 13:32:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.735 13:32:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.735 13:32:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.735 13:32:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.735 13:32:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.735 13:32:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.735 13:32:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.735 13:32:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.735 13:32:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:02.994 13:32:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:02.994 13:32:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:02.994 13:32:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:02.994 13:32:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.994 13:32:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.994 13:32:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:02.994 13:32:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.994 13:32:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.994 13:32:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.994 13:32:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.994 13:32:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.253 13:32:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.253 13:32:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.253 13:32:54 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:03.253 13:32:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.253 13:32:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.253 13:32:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.253 13:32:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.253 13:32:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.253 13:32:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.254 13:32:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.254 13:32:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.254 13:32:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.254 13:32:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:03.513 13:32:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:03.772 [2024-10-01 13:32:55.393239] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.772 [2024-10-01 13:32:55.445411] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.772 [2024-10-01 13:32:55.445421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.772 [2024-10-01 13:32:55.473185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:03.772 [2024-10-01 13:32:55.473279] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:03.772 [2024-10-01 13:32:55.473291] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.063 13:32:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58165 /var/tmp/spdk-nbd.sock 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58165 ']' 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
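The nbd verification traced above is a write/compare/stop cycle: 1 MiB of random data goes into a temp file, the file is dd'd to each exported /dev/nbdX with O_DIRECT, each device is cmp'd back against the file, and the disks are then stopped over the nbd RPC socket while /proc/partitions is polled until the entries disappear. A minimal standalone sketch of that cycle, with the device names, block sizes, and rpc.py path taken from the trace and the retry timing assumed:

#!/usr/bin/env bash
# Sketch of the nbd write/verify/stop cycle shown in the trace above.
# Assumes the SPDK target is already exporting /dev/nbd0 and /dev/nbd1.
set -euo pipefail

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk-nbd.sock
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=$(mktemp)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                          # byte-for-byte verify
done
rm "$tmp_file"

for dev in "${nbd_list[@]}"; do
    "$rpc_py" -s "$rpc_sock" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do                          # wait for the kernel to drop it
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1
    done
done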
00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:07.063 13:32:58 event.app_repeat -- event/event.sh@39 -- # killprocess 58165 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58165 ']' 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58165 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58165 00:05:07.063 killing process with pid 58165 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58165' 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58165 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58165 00:05:07.063 spdk_app_start is called in Round 0. 00:05:07.063 Shutdown signal received, stop current app iteration 00:05:07.063 Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 reinitialization... 00:05:07.063 spdk_app_start is called in Round 1. 00:05:07.063 Shutdown signal received, stop current app iteration 00:05:07.063 Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 reinitialization... 00:05:07.063 spdk_app_start is called in Round 2. 00:05:07.063 Shutdown signal received, stop current app iteration 00:05:07.063 Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 reinitialization... 00:05:07.063 spdk_app_start is called in Round 3. 00:05:07.063 Shutdown signal received, stop current app iteration 00:05:07.063 13:32:58 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:07.063 13:32:58 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:07.063 00:05:07.063 real 0m18.752s 00:05:07.063 user 0m42.988s 00:05:07.063 sys 0m2.588s 00:05:07.063 ************************************ 00:05:07.063 END TEST app_repeat 00:05:07.063 ************************************ 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.063 13:32:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.063 13:32:58 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:07.063 13:32:58 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:07.063 13:32:58 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.063 13:32:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.063 13:32:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.063 ************************************ 00:05:07.063 START TEST cpu_locks 00:05:07.063 ************************************ 00:05:07.063 13:32:58 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:07.063 * Looking for test storage... 
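Every test in this log tears its target down with the same killprocess sequence: confirm the pid is still alive with kill -0, read its command name with ps (reactors report as reactor_0), send the signal, and wait for it to be reaped. A simplified sketch of that sequence (the sudo special-casing visible in the trace is left out):

# Simplified killprocess: check the pid, log what is being killed, then reap it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                     # already gone

    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"

    kill "$pid"
    wait "$pid"                                    # works because the target is our child
}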
00:05:07.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:07.063 13:32:58 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:07.063 13:32:58 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:07.063 13:32:58 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:07.324 13:32:58 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.324 13:32:58 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:07.324 13:32:58 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.324 13:32:58 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:07.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.324 --rc genhtml_branch_coverage=1 00:05:07.324 --rc genhtml_function_coverage=1 00:05:07.324 --rc genhtml_legend=1 00:05:07.324 --rc geninfo_all_blocks=1 00:05:07.324 --rc geninfo_unexecuted_blocks=1 00:05:07.324 00:05:07.324 ' 00:05:07.324 13:32:58 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:07.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.324 --rc genhtml_branch_coverage=1 00:05:07.324 --rc genhtml_function_coverage=1 
00:05:07.324 --rc genhtml_legend=1 00:05:07.324 --rc geninfo_all_blocks=1 00:05:07.324 --rc geninfo_unexecuted_blocks=1 00:05:07.324 00:05:07.324 ' 00:05:07.324 13:32:58 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:07.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.324 --rc genhtml_branch_coverage=1 00:05:07.324 --rc genhtml_function_coverage=1 00:05:07.324 --rc genhtml_legend=1 00:05:07.324 --rc geninfo_all_blocks=1 00:05:07.324 --rc geninfo_unexecuted_blocks=1 00:05:07.324 00:05:07.324 ' 00:05:07.324 13:32:58 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:07.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.324 --rc genhtml_branch_coverage=1 00:05:07.324 --rc genhtml_function_coverage=1 00:05:07.324 --rc genhtml_legend=1 00:05:07.324 --rc geninfo_all_blocks=1 00:05:07.324 --rc geninfo_unexecuted_blocks=1 00:05:07.324 00:05:07.324 ' 00:05:07.324 13:32:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:07.324 13:32:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:07.324 13:32:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:07.324 13:32:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:07.324 13:32:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.324 13:32:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.324 13:32:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.324 ************************************ 00:05:07.324 START TEST default_locks 00:05:07.324 ************************************ 00:05:07.324 13:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:07.324 13:32:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58598 00:05:07.324 13:32:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.324 13:32:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58598 00:05:07.324 13:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58598 ']' 00:05:07.324 13:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.324 13:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.324 13:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.324 13:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.324 13:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.324 [2024-10-01 13:32:59.032386] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
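The scripts/common.sh trace just above (lt 1.15 2, IFS=.-:, read -ra ver1, per-field decimal compares) decides whether the installed lcov is older than 2 and therefore still needs the --rc lcov_branch_coverage/lcov_function_coverage flags. A compact sketch of that comparison, assuming purely numeric dotted versions:

# Dotted-version "less than" check, as used above to pick lcov options.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0                    # first differing field decides
        (( a > b )) && return 1
    done
    return 1                                       # equal is not "less than"
}

version_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'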
00:05:07.324 [2024-10-01 13:32:59.032682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58598 ] 00:05:07.324 [2024-10-01 13:32:59.163203] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.585 [2024-10-01 13:32:59.214768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.585 [2024-10-01 13:32:59.250732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:07.585 13:32:59 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.585 13:32:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:07.585 13:32:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58598 00:05:07.585 13:32:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.585 13:32:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58598 00:05:08.152 13:32:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58598 00:05:08.152 13:32:59 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58598 ']' 00:05:08.152 13:32:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58598 00:05:08.152 13:32:59 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:08.153 13:32:59 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.153 13:32:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58598 00:05:08.153 killing process with pid 58598 00:05:08.153 13:32:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.153 13:32:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.153 13:32:59 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58598' 00:05:08.153 13:32:59 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58598 00:05:08.153 13:32:59 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58598 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58598 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58598 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:08.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
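The locks_exist check traced above reduces to one question: does the target pid hold a file lock whose path contains spdk_cpu_lock? lslocks answers that directly. A sketch of the check as it appears in the trace:

# locks_exist: the target should hold a lock on a /var/tmp/spdk_cpu_lock_* file
# for every core it claimed (core 0 here, since it was started with -m 0x1).
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

locks_exist 58598 && echo "pid 58598 holds its CPU core lock"   # pid from the run above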
00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58598 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58598 ']' 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.412 ERROR: process (pid: 58598) is no longer running 00:05:08.412 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58598) - No such process 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:08.412 13:33:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:08.412 00:05:08.412 real 0m1.127s 00:05:08.412 user 0m1.248s 00:05:08.413 sys 0m0.442s 00:05:08.413 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.413 13:33:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.413 ************************************ 00:05:08.413 END TEST default_locks 00:05:08.413 ************************************ 00:05:08.413 13:33:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:08.413 13:33:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.413 13:33:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.413 13:33:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.413 ************************************ 00:05:08.413 START TEST default_locks_via_rpc 00:05:08.413 ************************************ 00:05:08.413 13:33:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:08.413 13:33:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58637 00:05:08.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
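The NOT waitforlisten 58598 block above is an expected-failure assertion: once the target is killed, waiting on its pid has to fail ('No such process'), and the wrapper inverts that failure into a pass. A reduced sketch of such a wrapper (the valid_exec_arg and signal-exit bookkeeping visible in the trace are omitted):

# NOT: succeed only when the wrapped command fails, as in `NOT waitforlisten 58598`.
NOT() {
    local es=0
    "$@" || es=$?
    (( !es == 0 ))        # non-zero exit from "$@" makes NOT return success
}

NOT kill -0 58598 && echo "pid 58598 is gone, as expected"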
00:05:08.413 13:33:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58637 00:05:08.413 13:33:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.413 13:33:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58637 ']' 00:05:08.413 13:33:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.413 13:33:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.413 13:33:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.413 13:33:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.413 13:33:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.413 [2024-10-01 13:33:00.220641] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:08.413 [2024-10-01 13:33:00.220740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58637 ] 00:05:08.672 [2024-10-01 13:33:00.358019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.672 [2024-10-01 13:33:00.413288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.672 [2024-10-01 13:33:00.450285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:09.606 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.606 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:09.606 13:33:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:09.606 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.607 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.607 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.607 13:33:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:09.607 13:33:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:09.607 13:33:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:09.607 13:33:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:09.607 13:33:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:09.607 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.607 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.607 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.607 13:33:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58637 00:05:09.607 13:33:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks 
-p 58637 00:05:09.607 13:33:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.865 13:33:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58637 00:05:09.865 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58637 ']' 00:05:09.865 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58637 00:05:09.865 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:09.865 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.865 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58637 00:05:09.865 killing process with pid 58637 00:05:09.865 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.865 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.865 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58637' 00:05:09.865 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58637 00:05:09.865 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58637 00:05:10.143 ************************************ 00:05:10.143 END TEST default_locks_via_rpc 00:05:10.143 ************************************ 00:05:10.143 00:05:10.143 real 0m1.637s 00:05:10.143 user 0m1.875s 00:05:10.143 sys 0m0.400s 00:05:10.143 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.143 13:33:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.143 13:33:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:10.143 13:33:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.143 13:33:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.143 13:33:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.143 ************************************ 00:05:10.143 START TEST non_locking_app_on_locked_coremask 00:05:10.143 ************************************ 00:05:10.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
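default_locks_via_rpc above drives the same lock check through the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs: with locking disabled no spdk_cpu_lock files should remain, and re-enabling it must take the core-0 lock again. A condensed sketch of that sequence, with the pid and socket taken from the run above and the lock-file check simplified to a glob test:

# Toggle CPU-mask locking over RPC and check the lock state on either side.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk.sock
spdk_tgt_pid=58637                                   # pid from the run above

"$rpc_py" -s "$rpc_sock" framework_disable_cpumask_locks
lock_files=(/var/tmp/spdk_cpu_lock_*)
[[ -e ${lock_files[0]} ]] && echo "unexpected lock files: ${lock_files[*]}"

"$rpc_py" -s "$rpc_sock" framework_enable_cpumask_locks
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "core-0 lock held again"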
00:05:10.143 13:33:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:10.143 13:33:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58688 00:05:10.143 13:33:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58688 /var/tmp/spdk.sock 00:05:10.143 13:33:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58688 ']' 00:05:10.143 13:33:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.143 13:33:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.143 13:33:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.143 13:33:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.143 13:33:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.143 13:33:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.143 [2024-10-01 13:33:01.917585] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:10.143 [2024-10-01 13:33:01.917701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58688 ] 00:05:10.401 [2024-10-01 13:33:02.056405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.401 [2024-10-01 13:33:02.108951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.401 [2024-10-01 13:33:02.145533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:10.660 13:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.660 13:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:10.660 13:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58691 00:05:10.660 13:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58691 /var/tmp/spdk2.sock 00:05:10.660 13:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:10.660 13:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58691 ']' 00:05:10.660 13:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.660 13:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.660 13:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
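non_locking_app_on_locked_coremask above runs two targets on the same core: the first takes the core-0 lock as usual, and the second only starts because it is given --disable-cpumask-locks and its own RPC socket. The flags and paths below are the ones from the trace; the readiness waits are reduced to comments:

# Two spdk_tgt instances sharing core 0; the second opts out of core locking.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$spdk_tgt" -m 0x1 &                                          # holds /var/tmp/spdk_cpu_lock_000
pid1=$!
# (wait for /var/tmp/spdk.sock to come up before starting the second instance)

"$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
# Without --disable-cpumask-locks this one would abort with
# "Cannot create lock on core 0, probably process <pid1> has claimed it."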
00:05:10.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.660 13:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.660 13:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.660 [2024-10-01 13:33:02.319440] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:10.660 [2024-10-01 13:33:02.319806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58691 ] 00:05:10.660 [2024-10-01 13:33:02.461384] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:10.660 [2024-10-01 13:33:02.461455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.919 [2024-10-01 13:33:02.574727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.919 [2024-10-01 13:33:02.649311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:11.485 13:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.485 13:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:11.485 13:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58688 00:05:11.485 13:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58688 00:05:11.485 13:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.418 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58688 00:05:12.418 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58688 ']' 00:05:12.418 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58688 00:05:12.418 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:12.418 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.418 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58688 00:05:12.675 killing process with pid 58688 00:05:12.675 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.675 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.675 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58688' 00:05:12.675 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58688 00:05:12.675 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58688 00:05:13.241 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58691 00:05:13.241 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58691 ']' 
00:05:13.241 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58691 00:05:13.241 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:13.241 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.241 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58691 00:05:13.241 killing process with pid 58691 00:05:13.241 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.241 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.241 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58691' 00:05:13.241 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58691 00:05:13.241 13:33:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58691 00:05:13.500 ************************************ 00:05:13.500 END TEST non_locking_app_on_locked_coremask 00:05:13.500 ************************************ 00:05:13.500 00:05:13.500 real 0m3.253s 00:05:13.500 user 0m3.777s 00:05:13.500 sys 0m0.961s 00:05:13.500 13:33:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.500 13:33:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.500 13:33:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:13.500 13:33:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.500 13:33:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.500 13:33:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.500 ************************************ 00:05:13.500 START TEST locking_app_on_unlocked_coremask 00:05:13.500 ************************************ 00:05:13.500 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:13.500 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58758 00:05:13.500 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58758 /var/tmp/spdk.sock 00:05:13.500 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:13.500 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58758 ']' 00:05:13.500 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.500 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:13.501 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.501 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.501 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.501 [2024-10-01 13:33:05.220280] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:13.501 [2024-10-01 13:33:05.220375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58758 ] 00:05:13.501 [2024-10-01 13:33:05.353782] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:13.501 [2024-10-01 13:33:05.354065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.759 [2024-10-01 13:33:05.412304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.759 [2024-10-01 13:33:05.451173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.759 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.759 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:13.759 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58767 00:05:13.759 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58767 /var/tmp/spdk2.sock 00:05:13.759 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58767 ']' 00:05:13.759 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:13.759 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.759 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.759 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.759 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.759 13:33:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.017 [2024-10-01 13:33:05.651365] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
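Each 'Waiting for process to start up and listen on UNIX domain socket ...' line in this log is printed by a retry loop (max_retries=100 in the trace) that keeps probing the RPC socket until the target answers or the pid dies. The trace does not show the probe itself, so the sketch below substitutes a plain rpc_get_methods round-trip as an assumed readiness check:

# waitforlisten-style readiness loop; the rpc_get_methods probe is an assumption,
# not necessarily what autotest_common.sh actually calls.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local max_retries=100

    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1               # target died before listening
        if [ -S "$rpc_addr" ] && "$rpc_py" -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}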
00:05:14.017 [2024-10-01 13:33:05.651505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58767 ] 00:05:14.017 [2024-10-01 13:33:05.803247] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.276 [2024-10-01 13:33:05.917609] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.276 [2024-10-01 13:33:05.996626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.842 13:33:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.842 13:33:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:14.842 13:33:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58767 00:05:14.842 13:33:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58767 00:05:14.842 13:33:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.779 13:33:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58758 00:05:15.779 13:33:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58758 ']' 00:05:15.779 13:33:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58758 00:05:15.779 13:33:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:15.779 13:33:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.779 13:33:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58758 00:05:15.779 13:33:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.779 13:33:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.779 13:33:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58758' 00:05:15.779 killing process with pid 58758 00:05:15.779 13:33:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58758 00:05:15.779 13:33:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58758 00:05:16.346 13:33:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58767 00:05:16.346 13:33:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58767 ']' 00:05:16.346 13:33:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58767 00:05:16.346 13:33:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:16.346 13:33:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.347 13:33:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58767 00:05:16.347 13:33:08 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.347 13:33:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.347 13:33:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58767' 00:05:16.347 killing process with pid 58767 00:05:16.347 13:33:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58767 00:05:16.347 13:33:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58767 00:05:16.605 00:05:16.606 real 0m3.175s 00:05:16.606 user 0m3.741s 00:05:16.606 sys 0m0.896s 00:05:16.606 ************************************ 00:05:16.606 13:33:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.606 13:33:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.606 END TEST locking_app_on_unlocked_coremask 00:05:16.606 ************************************ 00:05:16.606 13:33:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:16.606 13:33:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.606 13:33:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.606 13:33:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.606 ************************************ 00:05:16.606 START TEST locking_app_on_locked_coremask 00:05:16.606 ************************************ 00:05:16.606 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:16.606 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58828 00:05:16.606 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58828 /var/tmp/spdk.sock 00:05:16.606 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.606 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58828 ']' 00:05:16.606 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.606 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.606 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.606 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.606 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.864 [2024-10-01 13:33:08.479670] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
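Every END TEST banner in this log is followed by real/user/sys figures because run_test brackets the test command with START/END banners and bash's time builtin. A reduced sketch of that wrapper (argument validation and xtrace toggling dropped):

# run_test-style wrapper: banner, timed run, banner, propagate the exit status.
run_test_sketch() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return "$rc"
}

run_test_sketch cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh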
00:05:16.864 [2024-10-01 13:33:08.479814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58828 ] 00:05:16.864 [2024-10-01 13:33:08.617312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.864 [2024-10-01 13:33:08.671314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.864 [2024-10-01 13:33:08.710321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:17.123 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.123 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:17.123 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58837 00:05:17.123 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:17.123 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58837 /var/tmp/spdk2.sock 00:05:17.123 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:17.123 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58837 /var/tmp/spdk2.sock 00:05:17.123 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:17.124 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.124 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:17.124 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.124 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58837 /var/tmp/spdk2.sock 00:05:17.124 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58837 ']' 00:05:17.124 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.124 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.124 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.124 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.124 13:33:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.124 [2024-10-01 13:33:08.907643] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:17.124 [2024-10-01 13:33:08.907753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58837 ] 00:05:17.383 [2024-10-01 13:33:09.051956] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58828 has claimed it. 00:05:17.383 [2024-10-01 13:33:09.052036] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:17.950 ERROR: process (pid: 58837) is no longer running 00:05:17.950 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58837) - No such process 00:05:17.950 13:33:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.950 13:33:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:17.951 13:33:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:17.951 13:33:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:17.951 13:33:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:17.951 13:33:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:17.951 13:33:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58828 00:05:17.951 13:33:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58828 00:05:17.951 13:33:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.518 13:33:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58828 00:05:18.518 13:33:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58828 ']' 00:05:18.518 13:33:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58828 00:05:18.518 13:33:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:18.518 13:33:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.518 13:33:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58828 00:05:18.518 13:33:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.518 killing process with pid 58828 00:05:18.518 13:33:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.518 13:33:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58828' 00:05:18.518 13:33:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58828 00:05:18.518 13:33:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58828 00:05:18.518 00:05:18.518 real 0m1.981s 00:05:18.518 user 0m2.383s 00:05:18.518 sys 0m0.510s 00:05:18.518 ************************************ 00:05:18.518 END TEST locking_app_on_locked_coremask 00:05:18.518 ************************************ 00:05:18.518 13:33:10 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.518 13:33:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.776 13:33:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:18.776 13:33:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.776 13:33:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.776 13:33:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.776 ************************************ 00:05:18.776 START TEST locking_overlapped_coremask 00:05:18.776 ************************************ 00:05:18.776 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:18.776 13:33:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58882 00:05:18.776 13:33:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58882 /var/tmp/spdk.sock 00:05:18.776 13:33:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:18.777 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 58882 ']' 00:05:18.777 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.777 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.777 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.777 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.777 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.777 [2024-10-01 13:33:10.482973] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
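locking_app_on_locked_coremask above checks the failure path: with the first target holding the core-0 lock, a second instance on the same mask has to die with the claim_cpu_cores error quoted in the trace, and NOT waitforlisten turns that expected failure into a pass. A sketch of the assertion, reusing the illustrative helpers sketched earlier (NOT and waitforlisten_sketch are the hypothetical versions from above, not the real autotest helpers):

# A second target on an already-claimed core must fail to come up.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$spdk_tgt" -m 0x1 & pid1=$!
waitforlisten_sketch "$pid1" /var/tmp/spdk.sock

"$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!
# Expected in the second instance's log:
#   "Cannot create lock on core 0, probably process <pid1> has claimed it."
NOT waitforlisten_sketch "$pid2" /var/tmp/spdk2.sock && echo "lock conflict detected as expected"

kill "$pid1" 2>/dev/null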
00:05:18.777 [2024-10-01 13:33:10.483081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58882 ] 00:05:18.777 [2024-10-01 13:33:10.623665] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:19.035 [2024-10-01 13:33:10.684340] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.036 [2024-10-01 13:33:10.684385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.036 [2024-10-01 13:33:10.684392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.036 [2024-10-01 13:33:10.724193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58893 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58893 /var/tmp/spdk2.sock 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58893 /var/tmp/spdk2.sock 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58893 /var/tmp/spdk2.sock 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 58893 ']' 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.036 13:33:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.294 [2024-10-01 13:33:10.914181] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:19.294 [2024-10-01 13:33:10.914278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58893 ] 00:05:19.294 [2024-10-01 13:33:11.060102] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58882 has claimed it. 00:05:19.294 [2024-10-01 13:33:11.060178] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:19.861 ERROR: process (pid: 58893) is no longer running 00:05:19.861 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58893) - No such process 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58882 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 58882 ']' 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 58882 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.861 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58882 00:05:19.862 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.862 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.862 killing process with pid 58882 00:05:19.862 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58882' 00:05:19.862 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 58882 00:05:19.862 13:33:11 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 58882 00:05:20.121 00:05:20.121 real 0m1.498s 00:05:20.121 user 0m4.029s 00:05:20.121 sys 0m0.283s 00:05:20.121 ************************************ 00:05:20.121 END TEST locking_overlapped_coremask 00:05:20.121 ************************************ 00:05:20.121 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.121 13:33:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.121 13:33:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:20.121 13:33:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.121 13:33:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.121 13:33:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.121 ************************************ 00:05:20.121 START TEST locking_overlapped_coremask_via_rpc 00:05:20.121 ************************************ 00:05:20.121 13:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:20.121 13:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58933 00:05:20.121 13:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58933 /var/tmp/spdk.sock 00:05:20.121 13:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:20.121 13:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58933 ']' 00:05:20.121 13:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.121 13:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.121 13:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.121 13:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.121 13:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.395 [2024-10-01 13:33:12.033314] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:20.395 [2024-10-01 13:33:12.033404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58933 ] 00:05:20.395 [2024-10-01 13:33:12.167344] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:20.395 [2024-10-01 13:33:12.167396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.395 [2024-10-01 13:33:12.226947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.395 [2024-10-01 13:33:12.227058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.395 [2024-10-01 13:33:12.227062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.690 [2024-10-01 13:33:12.266938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.257 13:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.257 13:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:21.257 13:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58951 00:05:21.257 13:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58951 /var/tmp/spdk2.sock 00:05:21.257 13:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:21.257 13:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58951 ']' 00:05:21.257 13:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.257 13:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.257 13:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.257 13:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.257 13:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.516 [2024-10-01 13:33:13.144263] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:21.516 [2024-10-01 13:33:13.144387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58951 ] 00:05:21.516 [2024-10-01 13:33:13.299611] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:21.516 [2024-10-01 13:33:13.299668] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:21.774 [2024-10-01 13:33:13.420932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.774 [2024-10-01 13:33:13.424701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:05:21.774 [2024-10-01 13:33:13.424701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.774 [2024-10-01 13:33:13.503545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.341 [2024-10-01 13:33:14.181720] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58933 has claimed it. 
00:05:22.341 request: 00:05:22.341 { 00:05:22.341 "method": "framework_enable_cpumask_locks", 00:05:22.341 "req_id": 1 00:05:22.341 } 00:05:22.341 Got JSON-RPC error response 00:05:22.341 response: 00:05:22.341 { 00:05:22.341 "code": -32603, 00:05:22.341 "message": "Failed to claim CPU core: 2" 00:05:22.341 } 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58933 /var/tmp/spdk.sock 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58933 ']' 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.341 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.599 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.599 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:22.599 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58951 /var/tmp/spdk2.sock 00:05:22.599 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58951 ']' 00:05:22.599 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.599 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.599 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:22.599 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.599 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.857 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.857 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:22.857 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:22.857 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:22.857 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:22.858 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:22.858 00:05:22.858 real 0m2.743s 00:05:22.858 user 0m1.500s 00:05:22.858 sys 0m0.175s 00:05:22.858 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.858 13:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.858 ************************************ 00:05:22.858 END TEST locking_overlapped_coremask_via_rpc 00:05:22.858 ************************************ 00:05:23.117 13:33:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:23.117 13:33:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58933 ]] 00:05:23.117 13:33:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58933 00:05:23.117 13:33:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 58933 ']' 00:05:23.117 13:33:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 58933 00:05:23.117 13:33:14 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:23.117 13:33:14 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.117 13:33:14 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58933 00:05:23.117 13:33:14 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.117 13:33:14 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.117 killing process with pid 58933 00:05:23.117 13:33:14 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58933' 00:05:23.117 13:33:14 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 58933 00:05:23.117 13:33:14 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 58933 00:05:23.376 13:33:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58951 ]] 00:05:23.376 13:33:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58951 00:05:23.376 13:33:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 58951 ']' 00:05:23.376 13:33:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 58951 00:05:23.376 13:33:15 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:23.376 13:33:15 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.376 
13:33:15 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58951 00:05:23.376 13:33:15 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:23.376 13:33:15 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:23.376 killing process with pid 58951 00:05:23.376 13:33:15 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58951' 00:05:23.376 13:33:15 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 58951 00:05:23.376 13:33:15 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 58951 00:05:23.635 13:33:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:23.635 13:33:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:23.635 13:33:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58933 ]] 00:05:23.635 13:33:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58933 00:05:23.635 13:33:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 58933 ']' 00:05:23.635 13:33:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 58933 00:05:23.635 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (58933) - No such process 00:05:23.635 Process with pid 58933 is not found 00:05:23.635 13:33:15 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 58933 is not found' 00:05:23.635 13:33:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58951 ]] 00:05:23.635 13:33:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58951 00:05:23.635 13:33:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 58951 ']' 00:05:23.635 13:33:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 58951 00:05:23.635 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (58951) - No such process 00:05:23.635 Process with pid 58951 is not found 00:05:23.635 13:33:15 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 58951 is not found' 00:05:23.635 13:33:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:23.635 00:05:23.635 real 0m16.603s 00:05:23.635 user 0m31.026s 00:05:23.635 sys 0m4.390s 00:05:23.635 13:33:15 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.635 13:33:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.635 ************************************ 00:05:23.635 END TEST cpu_locks 00:05:23.635 ************************************ 00:05:23.635 00:05:23.635 real 0m44.964s 00:05:23.635 user 1m31.402s 00:05:23.635 sys 0m7.702s 00:05:23.635 13:33:15 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.635 13:33:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.635 ************************************ 00:05:23.635 END TEST event 00:05:23.635 ************************************ 00:05:23.635 13:33:15 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:23.635 13:33:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.635 13:33:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.635 13:33:15 -- common/autotest_common.sh@10 -- # set +x 00:05:23.635 ************************************ 00:05:23.635 START TEST thread 00:05:23.635 ************************************ 00:05:23.635 13:33:15 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:23.894 * Looking for test storage... 
00:05:23.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:23.894 13:33:15 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:23.894 13:33:15 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:23.894 13:33:15 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:23.894 13:33:15 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:23.894 13:33:15 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.894 13:33:15 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.894 13:33:15 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.894 13:33:15 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.894 13:33:15 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.894 13:33:15 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.894 13:33:15 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.894 13:33:15 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.894 13:33:15 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.894 13:33:15 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.894 13:33:15 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.894 13:33:15 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:23.894 13:33:15 thread -- scripts/common.sh@345 -- # : 1 00:05:23.894 13:33:15 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.894 13:33:15 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.894 13:33:15 thread -- scripts/common.sh@365 -- # decimal 1 00:05:23.894 13:33:15 thread -- scripts/common.sh@353 -- # local d=1 00:05:23.894 13:33:15 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.894 13:33:15 thread -- scripts/common.sh@355 -- # echo 1 00:05:23.894 13:33:15 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.894 13:33:15 thread -- scripts/common.sh@366 -- # decimal 2 00:05:23.894 13:33:15 thread -- scripts/common.sh@353 -- # local d=2 00:05:23.894 13:33:15 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.894 13:33:15 thread -- scripts/common.sh@355 -- # echo 2 00:05:23.894 13:33:15 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.894 13:33:15 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.894 13:33:15 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.894 13:33:15 thread -- scripts/common.sh@368 -- # return 0 00:05:23.894 13:33:15 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.894 13:33:15 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:23.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.894 --rc genhtml_branch_coverage=1 00:05:23.894 --rc genhtml_function_coverage=1 00:05:23.894 --rc genhtml_legend=1 00:05:23.894 --rc geninfo_all_blocks=1 00:05:23.894 --rc geninfo_unexecuted_blocks=1 00:05:23.894 00:05:23.894 ' 00:05:23.894 13:33:15 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:23.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.894 --rc genhtml_branch_coverage=1 00:05:23.894 --rc genhtml_function_coverage=1 00:05:23.894 --rc genhtml_legend=1 00:05:23.894 --rc geninfo_all_blocks=1 00:05:23.894 --rc geninfo_unexecuted_blocks=1 00:05:23.894 00:05:23.894 ' 00:05:23.894 13:33:15 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:23.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:23.894 --rc genhtml_branch_coverage=1 00:05:23.894 --rc genhtml_function_coverage=1 00:05:23.894 --rc genhtml_legend=1 00:05:23.894 --rc geninfo_all_blocks=1 00:05:23.894 --rc geninfo_unexecuted_blocks=1 00:05:23.894 00:05:23.894 ' 00:05:23.894 13:33:15 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:23.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.894 --rc genhtml_branch_coverage=1 00:05:23.894 --rc genhtml_function_coverage=1 00:05:23.894 --rc genhtml_legend=1 00:05:23.894 --rc geninfo_all_blocks=1 00:05:23.894 --rc geninfo_unexecuted_blocks=1 00:05:23.894 00:05:23.894 ' 00:05:23.894 13:33:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:23.894 13:33:15 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:23.894 13:33:15 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.894 13:33:15 thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.894 ************************************ 00:05:23.894 START TEST thread_poller_perf 00:05:23.894 ************************************ 00:05:23.894 13:33:15 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:23.894 [2024-10-01 13:33:15.659273] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:23.895 [2024-10-01 13:33:15.659398] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59081 ] 00:05:24.153 [2024-10-01 13:33:15.800089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.153 [2024-10-01 13:33:15.858641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.153 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:25.087 ====================================== 00:05:25.087 busy:2209641641 (cyc) 00:05:25.087 total_run_count: 306000 00:05:25.087 tsc_hz: 2200000000 (cyc) 00:05:25.087 ====================================== 00:05:25.087 poller_cost: 7221 (cyc), 3282 (nsec) 00:05:25.087 00:05:25.087 real 0m1.298s 00:05:25.087 user 0m1.143s 00:05:25.087 sys 0m0.048s 00:05:25.087 13:33:16 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.087 13:33:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.087 ************************************ 00:05:25.087 END TEST thread_poller_perf 00:05:25.087 ************************************ 00:05:25.345 13:33:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:25.345 13:33:16 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:25.345 13:33:16 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.345 13:33:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.345 ************************************ 00:05:25.345 START TEST thread_poller_perf 00:05:25.345 ************************************ 00:05:25.345 13:33:16 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:25.345 [2024-10-01 13:33:17.009001] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:25.345 [2024-10-01 13:33:17.009107] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59117 ] 00:05:25.345 [2024-10-01 13:33:17.146187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.345 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:25.345 [2024-10-01 13:33:17.204302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.722 ====================================== 00:05:26.722 busy:2201822401 (cyc) 00:05:26.722 total_run_count: 4079000 00:05:26.722 tsc_hz: 2200000000 (cyc) 00:05:26.722 ====================================== 00:05:26.722 poller_cost: 539 (cyc), 245 (nsec) 00:05:26.722 00:05:26.722 real 0m1.289s 00:05:26.722 user 0m1.142s 00:05:26.722 sys 0m0.040s 00:05:26.722 13:33:18 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.722 13:33:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.722 ************************************ 00:05:26.722 END TEST thread_poller_perf 00:05:26.722 ************************************ 00:05:26.722 13:33:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:26.722 00:05:26.722 real 0m2.852s 00:05:26.722 user 0m2.420s 00:05:26.722 sys 0m0.216s 00:05:26.722 13:33:18 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.722 13:33:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.722 ************************************ 00:05:26.722 END TEST thread 00:05:26.722 ************************************ 00:05:26.722 13:33:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:26.722 13:33:18 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:26.722 13:33:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.722 13:33:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.722 13:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.722 ************************************ 00:05:26.722 START TEST app_cmdline 00:05:26.722 ************************************ 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:26.722 * Looking for test storage... 00:05:26.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.722 13:33:18 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:26.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.722 --rc genhtml_branch_coverage=1 00:05:26.722 --rc genhtml_function_coverage=1 00:05:26.722 --rc genhtml_legend=1 00:05:26.722 --rc geninfo_all_blocks=1 00:05:26.722 --rc geninfo_unexecuted_blocks=1 00:05:26.722 00:05:26.722 ' 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:26.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.722 --rc genhtml_branch_coverage=1 00:05:26.722 --rc genhtml_function_coverage=1 00:05:26.722 --rc genhtml_legend=1 00:05:26.722 --rc geninfo_all_blocks=1 00:05:26.722 --rc geninfo_unexecuted_blocks=1 00:05:26.722 00:05:26.722 ' 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:26.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.722 --rc genhtml_branch_coverage=1 00:05:26.722 --rc genhtml_function_coverage=1 00:05:26.722 --rc genhtml_legend=1 00:05:26.722 --rc geninfo_all_blocks=1 00:05:26.722 --rc geninfo_unexecuted_blocks=1 00:05:26.722 00:05:26.722 ' 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:26.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.722 --rc genhtml_branch_coverage=1 00:05:26.722 --rc genhtml_function_coverage=1 00:05:26.722 --rc genhtml_legend=1 00:05:26.722 --rc geninfo_all_blocks=1 00:05:26.722 --rc geninfo_unexecuted_blocks=1 00:05:26.722 00:05:26.722 ' 00:05:26.722 13:33:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:26.722 13:33:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59199 00:05:26.722 13:33:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59199 00:05:26.722 13:33:18 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59199 ']' 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.722 13:33:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:26.981 [2024-10-01 13:33:18.595045] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:26.981 [2024-10-01 13:33:18.595154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59199 ] 00:05:26.981 [2024-10-01 13:33:18.733952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.981 [2024-10-01 13:33:18.792357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.981 [2024-10-01 13:33:18.831752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.916 13:33:19 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.916 13:33:19 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:27.916 13:33:19 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:28.175 { 00:05:28.175 "version": "SPDK v25.01-pre git sha1 7b38c9ede", 00:05:28.175 "fields": { 00:05:28.175 "major": 25, 00:05:28.175 "minor": 1, 00:05:28.175 "patch": 0, 00:05:28.175 "suffix": "-pre", 00:05:28.175 "commit": "7b38c9ede" 00:05:28.175 } 00:05:28.175 } 00:05:28.175 13:33:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:28.176 13:33:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:28.176 13:33:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:28.176 13:33:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:28.176 13:33:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:28.176 13:33:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:28.176 13:33:19 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.176 13:33:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:28.176 13:33:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:28.176 13:33:19 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.176 13:33:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:28.176 13:33:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:28.176 13:33:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:28.176 13:33:19 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:28.176 13:33:19 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:28.176 13:33:19 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:28.176 13:33:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.176 13:33:19 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:28.176 13:33:19 app_cmdline -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.176 13:33:19 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:28.176 13:33:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.176 13:33:19 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:28.176 13:33:19 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:28.176 13:33:19 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:28.435 request: 00:05:28.435 { 00:05:28.435 "method": "env_dpdk_get_mem_stats", 00:05:28.435 "req_id": 1 00:05:28.435 } 00:05:28.435 Got JSON-RPC error response 00:05:28.435 response: 00:05:28.435 { 00:05:28.435 "code": -32601, 00:05:28.435 "message": "Method not found" 00:05:28.435 } 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:28.435 13:33:20 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59199 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59199 ']' 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59199 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59199 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.435 killing process with pid 59199 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59199' 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@969 -- # kill 59199 00:05:28.435 13:33:20 app_cmdline -- common/autotest_common.sh@974 -- # wait 59199 00:05:28.693 00:05:28.693 real 0m2.067s 00:05:28.693 user 0m2.698s 00:05:28.693 sys 0m0.373s 00:05:28.693 13:33:20 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.693 13:33:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:28.693 ************************************ 00:05:28.693 END TEST app_cmdline 00:05:28.693 ************************************ 00:05:28.693 13:33:20 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:28.693 13:33:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.693 13:33:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.693 13:33:20 -- common/autotest_common.sh@10 -- # set +x 00:05:28.693 ************************************ 00:05:28.693 START TEST version 00:05:28.693 ************************************ 00:05:28.693 13:33:20 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:28.693 * Looking for test storage... 
00:05:28.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:28.693 13:33:20 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:28.693 13:33:20 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:28.952 13:33:20 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:28.952 13:33:20 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:28.952 13:33:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.952 13:33:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.952 13:33:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.952 13:33:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.952 13:33:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.952 13:33:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.952 13:33:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.952 13:33:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.953 13:33:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.953 13:33:20 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.953 13:33:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.953 13:33:20 version -- scripts/common.sh@344 -- # case "$op" in 00:05:28.953 13:33:20 version -- scripts/common.sh@345 -- # : 1 00:05:28.953 13:33:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.953 13:33:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.953 13:33:20 version -- scripts/common.sh@365 -- # decimal 1 00:05:28.953 13:33:20 version -- scripts/common.sh@353 -- # local d=1 00:05:28.953 13:33:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.953 13:33:20 version -- scripts/common.sh@355 -- # echo 1 00:05:28.953 13:33:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.953 13:33:20 version -- scripts/common.sh@366 -- # decimal 2 00:05:28.953 13:33:20 version -- scripts/common.sh@353 -- # local d=2 00:05:28.953 13:33:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.953 13:33:20 version -- scripts/common.sh@355 -- # echo 2 00:05:28.953 13:33:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.953 13:33:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.953 13:33:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.953 13:33:20 version -- scripts/common.sh@368 -- # return 0 00:05:28.953 13:33:20 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.953 13:33:20 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:28.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.953 --rc genhtml_branch_coverage=1 00:05:28.953 --rc genhtml_function_coverage=1 00:05:28.953 --rc genhtml_legend=1 00:05:28.953 --rc geninfo_all_blocks=1 00:05:28.953 --rc geninfo_unexecuted_blocks=1 00:05:28.953 00:05:28.953 ' 00:05:28.953 13:33:20 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:28.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.953 --rc genhtml_branch_coverage=1 00:05:28.953 --rc genhtml_function_coverage=1 00:05:28.953 --rc genhtml_legend=1 00:05:28.953 --rc geninfo_all_blocks=1 00:05:28.953 --rc geninfo_unexecuted_blocks=1 00:05:28.953 00:05:28.953 ' 00:05:28.953 13:33:20 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:28.953 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:28.953 --rc genhtml_branch_coverage=1 00:05:28.953 --rc genhtml_function_coverage=1 00:05:28.953 --rc genhtml_legend=1 00:05:28.953 --rc geninfo_all_blocks=1 00:05:28.953 --rc geninfo_unexecuted_blocks=1 00:05:28.953 00:05:28.953 ' 00:05:28.953 13:33:20 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:28.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.953 --rc genhtml_branch_coverage=1 00:05:28.953 --rc genhtml_function_coverage=1 00:05:28.953 --rc genhtml_legend=1 00:05:28.953 --rc geninfo_all_blocks=1 00:05:28.953 --rc geninfo_unexecuted_blocks=1 00:05:28.953 00:05:28.953 ' 00:05:28.953 13:33:20 version -- app/version.sh@17 -- # get_header_version major 00:05:28.953 13:33:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.953 13:33:20 version -- app/version.sh@14 -- # cut -f2 00:05:28.953 13:33:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:28.953 13:33:20 version -- app/version.sh@17 -- # major=25 00:05:28.953 13:33:20 version -- app/version.sh@18 -- # get_header_version minor 00:05:28.953 13:33:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:28.953 13:33:20 version -- app/version.sh@14 -- # cut -f2 00:05:28.953 13:33:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.953 13:33:20 version -- app/version.sh@18 -- # minor=1 00:05:28.953 13:33:20 version -- app/version.sh@19 -- # get_header_version patch 00:05:28.953 13:33:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:28.953 13:33:20 version -- app/version.sh@14 -- # cut -f2 00:05:28.953 13:33:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.953 13:33:20 version -- app/version.sh@19 -- # patch=0 00:05:28.953 13:33:20 version -- app/version.sh@20 -- # get_header_version suffix 00:05:28.953 13:33:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:28.953 13:33:20 version -- app/version.sh@14 -- # cut -f2 00:05:28.953 13:33:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.953 13:33:20 version -- app/version.sh@20 -- # suffix=-pre 00:05:28.953 13:33:20 version -- app/version.sh@22 -- # version=25.1 00:05:28.953 13:33:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:28.953 13:33:20 version -- app/version.sh@28 -- # version=25.1rc0 00:05:28.953 13:33:20 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:28.953 13:33:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:28.953 13:33:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:28.953 13:33:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:28.953 00:05:28.953 real 0m0.256s 00:05:28.953 user 0m0.170s 00:05:28.953 sys 0m0.120s 00:05:28.953 13:33:20 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.953 13:33:20 version -- common/autotest_common.sh@10 -- # set +x 00:05:28.953 ************************************ 00:05:28.953 END TEST version 00:05:28.953 ************************************ 00:05:28.953 13:33:20 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:28.953 13:33:20 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:28.953 13:33:20 -- spdk/autotest.sh@194 -- # uname -s 00:05:28.953 13:33:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:28.953 13:33:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:28.953 13:33:20 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:28.953 13:33:20 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:28.953 13:33:20 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:28.953 13:33:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.953 13:33:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.953 13:33:20 -- common/autotest_common.sh@10 -- # set +x 00:05:28.953 ************************************ 00:05:28.953 START TEST spdk_dd 00:05:28.953 ************************************ 00:05:28.953 13:33:20 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:29.212 * Looking for test storage... 00:05:29.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:29.212 13:33:20 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:29.212 13:33:20 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:05:29.212 13:33:20 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:29.212 13:33:20 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.212 13:33:20 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:29.213 13:33:20 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.213 13:33:20 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:29.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.213 --rc genhtml_branch_coverage=1 00:05:29.213 --rc genhtml_function_coverage=1 00:05:29.213 --rc genhtml_legend=1 00:05:29.213 --rc geninfo_all_blocks=1 00:05:29.213 --rc geninfo_unexecuted_blocks=1 00:05:29.213 00:05:29.213 ' 00:05:29.213 13:33:20 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:29.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.213 --rc genhtml_branch_coverage=1 00:05:29.213 --rc genhtml_function_coverage=1 00:05:29.213 --rc genhtml_legend=1 00:05:29.213 --rc geninfo_all_blocks=1 00:05:29.213 --rc geninfo_unexecuted_blocks=1 00:05:29.213 00:05:29.213 ' 00:05:29.213 13:33:20 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:29.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.213 --rc genhtml_branch_coverage=1 00:05:29.213 --rc genhtml_function_coverage=1 00:05:29.213 --rc genhtml_legend=1 00:05:29.213 --rc geninfo_all_blocks=1 00:05:29.213 --rc geninfo_unexecuted_blocks=1 00:05:29.213 00:05:29.213 ' 00:05:29.213 13:33:20 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:29.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.213 --rc genhtml_branch_coverage=1 00:05:29.213 --rc genhtml_function_coverage=1 00:05:29.213 --rc genhtml_legend=1 00:05:29.213 --rc geninfo_all_blocks=1 00:05:29.213 --rc geninfo_unexecuted_blocks=1 00:05:29.213 00:05:29.213 ' 00:05:29.213 13:33:20 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.213 13:33:20 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.213 13:33:20 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.213 13:33:20 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.213 13:33:20 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.213 13:33:20 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.213 13:33:20 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.213 13:33:20 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.213 13:33:20 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:29.213 13:33:20 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.213 13:33:20 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.471 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.471 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:29.471 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:29.471 13:33:21 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:29.471 13:33:21 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:29.471 13:33:21 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:29.471 13:33:21 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:29.731 13:33:21 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:29.731 13:33:21 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
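The dd/common.sh@137-143 entries in this part of the trace are check_liburing at work: objdump -p lists the NEEDED (DT_NEEDED) shared libraries of the spdk_dd binary, and each one is matched against liburing.so.*. A minimal standalone sketch of that check, reusing only the binary path, variable names, and message visible in the trace (an illustration of what the log is doing, not the dd/common.sh source itself):

# Walk the NEEDED entries of the binary; the second field of each line is the library name.
liburing_in_use=0
while read -r _ lib _; do
    # Any liburing.so.* entry means spdk_dd was built against liburing.
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
(( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'

Further down in the trace the matching entry is liburing.so.2, which is why liburing_in_use is exported as 1 before the basic_rw tests start.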
00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.731 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:29.732 * spdk_dd linked to liburing 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:29.732 13:33:21 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_PGO_USE=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:05:29.732 13:33:21 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:05:29.733 13:33:21 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:05:29.733 13:33:21 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:05:29.733 13:33:21 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:05:29.733 13:33:21 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:05:29.733 13:33:21 
spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:05:29.733 13:33:21 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:05:29.733 13:33:21 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:05:29.733 13:33:21 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:29.733 13:33:21 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:05:29.733 13:33:21 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:05:29.733 13:33:21 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:05:29.733 13:33:21 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:29.733 13:33:21 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:29.733 13:33:21 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:29.733 13:33:21 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:29.733 13:33:21 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:29.733 13:33:21 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:29.733 13:33:21 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:29.733 13:33:21 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.733 13:33:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:29.733 ************************************ 00:05:29.733 START TEST spdk_dd_basic_rw 00:05:29.733 ************************************ 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:29.733 * Looking for test storage... 00:05:29.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.733 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:30.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.032 --rc genhtml_branch_coverage=1 00:05:30.032 --rc genhtml_function_coverage=1 00:05:30.032 --rc genhtml_legend=1 00:05:30.032 --rc geninfo_all_blocks=1 00:05:30.032 --rc geninfo_unexecuted_blocks=1 00:05:30.032 00:05:30.032 ' 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:30.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.032 --rc genhtml_branch_coverage=1 00:05:30.032 --rc genhtml_function_coverage=1 00:05:30.032 --rc genhtml_legend=1 00:05:30.032 --rc geninfo_all_blocks=1 00:05:30.032 --rc geninfo_unexecuted_blocks=1 00:05:30.032 00:05:30.032 ' 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:30.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.032 --rc genhtml_branch_coverage=1 00:05:30.032 --rc genhtml_function_coverage=1 00:05:30.032 --rc genhtml_legend=1 00:05:30.032 --rc geninfo_all_blocks=1 00:05:30.032 --rc geninfo_unexecuted_blocks=1 00:05:30.032 00:05:30.032 ' 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:30.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.032 --rc genhtml_branch_coverage=1 00:05:30.032 --rc genhtml_function_coverage=1 00:05:30.032 --rc genhtml_legend=1 00:05:30.032 --rc geninfo_all_blocks=1 00:05:30.032 --rc geninfo_unexecuted_blocks=1 00:05:30.032 00:05:30.032 ' 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:30.032 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:30.033 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:30.033 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:30.033 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:30.033 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:30.033 13:33:21 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:30.033 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:30.033 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:30.033 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:30.033 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:30.033 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:30.034 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:30.034 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:30.034 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:30.034 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:30.034 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:30.034 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:30.034 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:30.034 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.034 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:30.034 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:30.034 13:33:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:30.034 13:33:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:30.034 ************************************ 00:05:30.034 START TEST dd_bs_lt_native_bs 00:05:30.034 ************************************ 00:05:30.035 13:33:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:30.035 13:33:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:05:30.035 13:33:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:30.035 13:33:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:30.035 13:33:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.035 13:33:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:30.035 13:33:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.035 13:33:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:30.035 13:33:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.035 13:33:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:30.035 13:33:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:30.035 13:33:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:30.035 { 00:05:30.035 "subsystems": [ 00:05:30.035 { 00:05:30.035 "subsystem": "bdev", 00:05:30.035 "config": [ 00:05:30.035 { 00:05:30.035 "params": { 00:05:30.035 "trtype": "pcie", 00:05:30.035 "traddr": "0000:00:10.0", 00:05:30.035 "name": "Nvme0" 00:05:30.035 }, 00:05:30.035 "method": "bdev_nvme_attach_controller" 00:05:30.035 }, 00:05:30.035 { 00:05:30.035 "method": "bdev_wait_for_examine" 00:05:30.035 } 00:05:30.035 ] 00:05:30.035 } 00:05:30.035 ] 00:05:30.035 } 00:05:30.294 [2024-10-01 13:33:21.874220] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:30.294 [2024-10-01 13:33:21.874356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59545 ] 00:05:30.294 [2024-10-01 13:33:22.020066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.294 [2024-10-01 13:33:22.088090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.294 [2024-10-01 13:33:22.120739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.552 [2024-10-01 13:33:22.215298] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:30.552 [2024-10-01 13:33:22.215384] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:30.552 [2024-10-01 13:33:22.292562] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:30.552 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:05:30.552 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.552 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:05:30.552 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:05:30.553 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:05:30.553 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.553 00:05:30.553 real 0m0.579s 00:05:30.553 user 0m0.396s 00:05:30.553 sys 0m0.136s 00:05:30.553 ************************************ 00:05:30.553 END TEST dd_bs_lt_native_bs 00:05:30.553 ************************************ 00:05:30.553 
13:33:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.553 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:30.811 ************************************ 00:05:30.811 START TEST dd_rw 00:05:30.811 ************************************ 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:30.811 13:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:31.377 13:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:31.377 13:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:31.377 13:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:31.377 13:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:31.377 { 00:05:31.377 "subsystems": [ 00:05:31.377 { 00:05:31.377 "subsystem": "bdev", 00:05:31.377 "config": [ 00:05:31.377 { 00:05:31.377 "params": { 00:05:31.377 "trtype": "pcie", 00:05:31.377 "traddr": "0000:00:10.0", 00:05:31.377 "name": "Nvme0" 00:05:31.377 }, 00:05:31.377 "method": "bdev_nvme_attach_controller" 00:05:31.377 }, 00:05:31.377 { 00:05:31.377 "method": "bdev_wait_for_examine" 00:05:31.377 } 00:05:31.377 ] 00:05:31.377 } 00:05:31.377 
] 00:05:31.377 } 00:05:31.378 [2024-10-01 13:33:23.136608] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:31.378 [2024-10-01 13:33:23.136716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59582 ] 00:05:31.636 [2024-10-01 13:33:23.269949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.636 [2024-10-01 13:33:23.326454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.636 [2024-10-01 13:33:23.356386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.897  Copying: 60/60 [kB] (average 19 MBps) 00:05:31.897 00:05:31.897 13:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:31.897 13:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:31.897 13:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:31.897 13:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:31.897 [2024-10-01 13:33:23.653427] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:31.897 [2024-10-01 13:33:23.654063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59595 ] 00:05:31.897 { 00:05:31.897 "subsystems": [ 00:05:31.897 { 00:05:31.897 "subsystem": "bdev", 00:05:31.897 "config": [ 00:05:31.897 { 00:05:31.897 "params": { 00:05:31.897 "trtype": "pcie", 00:05:31.897 "traddr": "0000:00:10.0", 00:05:31.897 "name": "Nvme0" 00:05:31.897 }, 00:05:31.897 "method": "bdev_nvme_attach_controller" 00:05:31.897 }, 00:05:31.897 { 00:05:31.897 "method": "bdev_wait_for_examine" 00:05:31.897 } 00:05:31.897 ] 00:05:31.897 } 00:05:31.897 ] 00:05:31.897 } 00:05:32.156 [2024-10-01 13:33:23.791452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.156 [2024-10-01 13:33:23.846634] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.156 [2024-10-01 13:33:23.873829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.413  Copying: 60/60 [kB] (average 14 MBps) 00:05:32.413 00:05:32.413 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:32.413 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:32.413 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:32.413 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:32.413 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:32.413 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:32.413 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:32.413 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:32.413 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:32.413 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:32.413 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:32.413 [2024-10-01 13:33:24.177337] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:32.413 [2024-10-01 13:33:24.177431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59611 ] 00:05:32.413 { 00:05:32.413 "subsystems": [ 00:05:32.413 { 00:05:32.413 "subsystem": "bdev", 00:05:32.413 "config": [ 00:05:32.413 { 00:05:32.413 "params": { 00:05:32.413 "trtype": "pcie", 00:05:32.413 "traddr": "0000:00:10.0", 00:05:32.413 "name": "Nvme0" 00:05:32.413 }, 00:05:32.413 "method": "bdev_nvme_attach_controller" 00:05:32.413 }, 00:05:32.413 { 00:05:32.413 "method": "bdev_wait_for_examine" 00:05:32.413 } 00:05:32.413 ] 00:05:32.413 } 00:05:32.413 ] 00:05:32.413 } 00:05:32.671 [2024-10-01 13:33:24.313624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.671 [2024-10-01 13:33:24.369724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.671 [2024-10-01 13:33:24.400972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.928  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:32.928 00:05:32.928 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:32.928 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:32.928 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:32.928 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:32.928 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:32.928 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:32.928 13:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:33.493 13:33:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:33.493 13:33:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:33.493 13:33:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:33.493 13:33:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:33.493 [2024-10-01 13:33:25.322532] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:33.493 [2024-10-01 13:33:25.322648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59630 ] 00:05:33.493 { 00:05:33.493 "subsystems": [ 00:05:33.493 { 00:05:33.493 "subsystem": "bdev", 00:05:33.493 "config": [ 00:05:33.493 { 00:05:33.493 "params": { 00:05:33.493 "trtype": "pcie", 00:05:33.493 "traddr": "0000:00:10.0", 00:05:33.493 "name": "Nvme0" 00:05:33.493 }, 00:05:33.493 "method": "bdev_nvme_attach_controller" 00:05:33.493 }, 00:05:33.493 { 00:05:33.493 "method": "bdev_wait_for_examine" 00:05:33.493 } 00:05:33.493 ] 00:05:33.493 } 00:05:33.493 ] 00:05:33.493 } 00:05:33.752 [2024-10-01 13:33:25.457798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.752 [2024-10-01 13:33:25.528712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.752 [2024-10-01 13:33:25.563499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.011  Copying: 60/60 [kB] (average 58 MBps) 00:05:34.011 00:05:34.011 13:33:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:34.011 13:33:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:34.011 13:33:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:34.011 13:33:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:34.011 [2024-10-01 13:33:25.859396] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:34.011 [2024-10-01 13:33:25.859491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59643 ] 00:05:34.011 { 00:05:34.011 "subsystems": [ 00:05:34.012 { 00:05:34.012 "subsystem": "bdev", 00:05:34.012 "config": [ 00:05:34.012 { 00:05:34.012 "params": { 00:05:34.012 "trtype": "pcie", 00:05:34.012 "traddr": "0000:00:10.0", 00:05:34.012 "name": "Nvme0" 00:05:34.012 }, 00:05:34.012 "method": "bdev_nvme_attach_controller" 00:05:34.012 }, 00:05:34.012 { 00:05:34.012 "method": "bdev_wait_for_examine" 00:05:34.012 } 00:05:34.012 ] 00:05:34.012 } 00:05:34.012 ] 00:05:34.012 } 00:05:34.271 [2024-10-01 13:33:25.993889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.271 [2024-10-01 13:33:26.056112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.271 [2024-10-01 13:33:26.089234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.531  Copying: 60/60 [kB] (average 58 MBps) 00:05:34.531 00:05:34.531 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:34.531 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:34.531 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:34.531 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:34.531 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:34.531 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:34.531 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:34.531 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:34.531 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:34.531 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:34.531 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:34.790 [2024-10-01 13:33:26.408570] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:34.790 [2024-10-01 13:33:26.408677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59659 ] 00:05:34.790 { 00:05:34.790 "subsystems": [ 00:05:34.790 { 00:05:34.790 "subsystem": "bdev", 00:05:34.790 "config": [ 00:05:34.790 { 00:05:34.790 "params": { 00:05:34.790 "trtype": "pcie", 00:05:34.790 "traddr": "0000:00:10.0", 00:05:34.790 "name": "Nvme0" 00:05:34.790 }, 00:05:34.790 "method": "bdev_nvme_attach_controller" 00:05:34.790 }, 00:05:34.790 { 00:05:34.790 "method": "bdev_wait_for_examine" 00:05:34.790 } 00:05:34.790 ] 00:05:34.790 } 00:05:34.790 ] 00:05:34.790 } 00:05:34.790 [2024-10-01 13:33:26.546355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.790 [2024-10-01 13:33:26.608656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.790 [2024-10-01 13:33:26.641427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.050  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:35.050 00:05:35.050 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:35.050 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:35.050 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:35.050 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:35.050 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:35.050 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:35.050 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:35.050 13:33:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:35.987 13:33:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:35.987 13:33:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:35.987 13:33:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:35.987 13:33:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:35.987 { 00:05:35.987 "subsystems": [ 00:05:35.987 { 00:05:35.987 "subsystem": "bdev", 00:05:35.987 "config": [ 00:05:35.987 { 00:05:35.987 "params": { 00:05:35.987 "trtype": "pcie", 00:05:35.987 "traddr": "0000:00:10.0", 00:05:35.987 "name": "Nvme0" 00:05:35.987 }, 00:05:35.987 "method": "bdev_nvme_attach_controller" 00:05:35.987 }, 00:05:35.987 { 00:05:35.987 "method": "bdev_wait_for_examine" 00:05:35.987 } 00:05:35.987 ] 00:05:35.987 } 00:05:35.987 ] 00:05:35.987 } 00:05:35.987 [2024-10-01 13:33:27.567360] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:35.987 [2024-10-01 13:33:27.567511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59678 ] 00:05:35.987 [2024-10-01 13:33:27.710624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.987 [2024-10-01 13:33:27.771568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.987 [2024-10-01 13:33:27.802306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.246  Copying: 56/56 [kB] (average 54 MBps) 00:05:36.246 00:05:36.246 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:36.246 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:36.246 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:36.246 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:36.506 { 00:05:36.506 "subsystems": [ 00:05:36.506 { 00:05:36.506 "subsystem": "bdev", 00:05:36.506 "config": [ 00:05:36.506 { 00:05:36.506 "params": { 00:05:36.506 "trtype": "pcie", 00:05:36.506 "traddr": "0000:00:10.0", 00:05:36.506 "name": "Nvme0" 00:05:36.506 }, 00:05:36.506 "method": "bdev_nvme_attach_controller" 00:05:36.506 }, 00:05:36.506 { 00:05:36.506 "method": "bdev_wait_for_examine" 00:05:36.506 } 00:05:36.506 ] 00:05:36.506 } 00:05:36.506 ] 00:05:36.506 } 00:05:36.506 [2024-10-01 13:33:28.123847] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:36.506 [2024-10-01 13:33:28.123989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59697 ] 00:05:36.506 [2024-10-01 13:33:28.268590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.506 [2024-10-01 13:33:28.338691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.764 [2024-10-01 13:33:28.372106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.764  Copying: 56/56 [kB] (average 27 MBps) 00:05:36.764 00:05:37.024 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:37.024 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:37.024 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:37.024 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:37.024 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:37.024 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:37.024 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:37.024 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:37.024 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:37.024 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:37.024 13:33:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:37.024 [2024-10-01 13:33:28.683012] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:37.024 [2024-10-01 13:33:28.683492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59707 ] 00:05:37.024 { 00:05:37.024 "subsystems": [ 00:05:37.024 { 00:05:37.024 "subsystem": "bdev", 00:05:37.024 "config": [ 00:05:37.024 { 00:05:37.024 "params": { 00:05:37.024 "trtype": "pcie", 00:05:37.024 "traddr": "0000:00:10.0", 00:05:37.024 "name": "Nvme0" 00:05:37.024 }, 00:05:37.024 "method": "bdev_nvme_attach_controller" 00:05:37.024 }, 00:05:37.024 { 00:05:37.024 "method": "bdev_wait_for_examine" 00:05:37.024 } 00:05:37.024 ] 00:05:37.024 } 00:05:37.024 ] 00:05:37.024 } 00:05:37.024 [2024-10-01 13:33:28.818065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.024 [2024-10-01 13:33:28.877496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.282 [2024-10-01 13:33:28.908100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.540  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:37.540 00:05:37.540 13:33:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:37.540 13:33:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:37.540 13:33:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:37.540 13:33:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:37.540 13:33:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:37.540 13:33:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:37.540 13:33:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.108 13:33:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:38.108 13:33:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:38.108 13:33:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:38.108 13:33:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.108 { 00:05:38.108 "subsystems": [ 00:05:38.108 { 00:05:38.108 "subsystem": "bdev", 00:05:38.108 "config": [ 00:05:38.108 { 00:05:38.108 "params": { 00:05:38.108 "trtype": "pcie", 00:05:38.108 "traddr": "0000:00:10.0", 00:05:38.108 "name": "Nvme0" 00:05:38.108 }, 00:05:38.108 "method": "bdev_nvme_attach_controller" 00:05:38.108 }, 00:05:38.109 { 00:05:38.109 "method": "bdev_wait_for_examine" 00:05:38.109 } 00:05:38.109 ] 00:05:38.109 } 00:05:38.109 ] 00:05:38.109 } 00:05:38.109 [2024-10-01 13:33:29.834527] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:38.109 [2024-10-01 13:33:29.834640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59726 ] 00:05:38.367 [2024-10-01 13:33:29.973673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.367 [2024-10-01 13:33:30.042214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.367 [2024-10-01 13:33:30.072317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.627  Copying: 56/56 [kB] (average 54 MBps) 00:05:38.627 00:05:38.627 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:38.627 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:38.627 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:38.627 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.627 { 00:05:38.627 "subsystems": [ 00:05:38.627 { 00:05:38.627 "subsystem": "bdev", 00:05:38.627 "config": [ 00:05:38.627 { 00:05:38.627 "params": { 00:05:38.627 "trtype": "pcie", 00:05:38.627 "traddr": "0000:00:10.0", 00:05:38.627 "name": "Nvme0" 00:05:38.627 }, 00:05:38.627 "method": "bdev_nvme_attach_controller" 00:05:38.627 }, 00:05:38.627 { 00:05:38.627 "method": "bdev_wait_for_examine" 00:05:38.627 } 00:05:38.627 ] 00:05:38.627 } 00:05:38.627 ] 00:05:38.627 } 00:05:38.627 [2024-10-01 13:33:30.382719] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:38.627 [2024-10-01 13:33:30.382810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59745 ] 00:05:38.885 [2024-10-01 13:33:30.521119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.885 [2024-10-01 13:33:30.580513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.885 [2024-10-01 13:33:30.610477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.156  Copying: 56/56 [kB] (average 54 MBps) 00:05:39.156 00:05:39.156 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:39.156 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:39.156 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:39.156 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:39.156 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:39.156 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:39.156 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:39.156 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:39.156 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:39.156 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:39.156 13:33:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:39.156 [2024-10-01 13:33:30.924186] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:39.156 [2024-10-01 13:33:30.924288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59760 ] 00:05:39.156 { 00:05:39.156 "subsystems": [ 00:05:39.156 { 00:05:39.156 "subsystem": "bdev", 00:05:39.156 "config": [ 00:05:39.156 { 00:05:39.156 "params": { 00:05:39.156 "trtype": "pcie", 00:05:39.156 "traddr": "0000:00:10.0", 00:05:39.156 "name": "Nvme0" 00:05:39.156 }, 00:05:39.156 "method": "bdev_nvme_attach_controller" 00:05:39.156 }, 00:05:39.156 { 00:05:39.156 "method": "bdev_wait_for_examine" 00:05:39.156 } 00:05:39.156 ] 00:05:39.156 } 00:05:39.156 ] 00:05:39.156 } 00:05:39.442 [2024-10-01 13:33:31.058941] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.442 [2024-10-01 13:33:31.118895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.442 [2024-10-01 13:33:31.149334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.701  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:39.701 00:05:39.701 13:33:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:39.701 13:33:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:39.701 13:33:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:39.701 13:33:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:39.701 13:33:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:39.701 13:33:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:39.701 13:33:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:39.701 13:33:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.269 13:33:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:40.269 13:33:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:40.269 13:33:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:40.269 13:33:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.269 [2024-10-01 13:33:31.981859] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:40.269 [2024-10-01 13:33:31.981968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59779 ] 00:05:40.269 { 00:05:40.269 "subsystems": [ 00:05:40.269 { 00:05:40.269 "subsystem": "bdev", 00:05:40.269 "config": [ 00:05:40.269 { 00:05:40.269 "params": { 00:05:40.269 "trtype": "pcie", 00:05:40.269 "traddr": "0000:00:10.0", 00:05:40.269 "name": "Nvme0" 00:05:40.269 }, 00:05:40.269 "method": "bdev_nvme_attach_controller" 00:05:40.269 }, 00:05:40.269 { 00:05:40.269 "method": "bdev_wait_for_examine" 00:05:40.269 } 00:05:40.269 ] 00:05:40.269 } 00:05:40.269 ] 00:05:40.269 } 00:05:40.269 [2024-10-01 13:33:32.121739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.527 [2024-10-01 13:33:32.191116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.527 [2024-10-01 13:33:32.224305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.787  Copying: 48/48 [kB] (average 46 MBps) 00:05:40.787 00:05:40.787 13:33:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:40.787 13:33:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:40.787 13:33:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:40.787 13:33:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.787 { 00:05:40.787 "subsystems": [ 00:05:40.787 { 00:05:40.787 "subsystem": "bdev", 00:05:40.787 "config": [ 00:05:40.787 { 00:05:40.787 "params": { 00:05:40.787 "trtype": "pcie", 00:05:40.787 "traddr": "0000:00:10.0", 00:05:40.787 "name": "Nvme0" 00:05:40.787 }, 00:05:40.787 "method": "bdev_nvme_attach_controller" 00:05:40.787 }, 00:05:40.787 { 00:05:40.787 "method": "bdev_wait_for_examine" 00:05:40.787 } 00:05:40.787 ] 00:05:40.787 } 00:05:40.787 ] 00:05:40.787 } 00:05:40.787 [2024-10-01 13:33:32.537165] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:40.787 [2024-10-01 13:33:32.537264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59793 ] 00:05:41.046 [2024-10-01 13:33:32.674338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.046 [2024-10-01 13:33:32.733386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.046 [2024-10-01 13:33:32.763710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.305  Copying: 48/48 [kB] (average 46 MBps) 00:05:41.305 00:05:41.305 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:41.305 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:41.305 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:41.305 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:41.305 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:41.305 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:41.305 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:41.305 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:41.305 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:41.305 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:41.305 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.305 [2024-10-01 13:33:33.073699] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:41.305 [2024-10-01 13:33:33.073784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59808 ] 00:05:41.305 { 00:05:41.305 "subsystems": [ 00:05:41.305 { 00:05:41.305 "subsystem": "bdev", 00:05:41.305 "config": [ 00:05:41.305 { 00:05:41.305 "params": { 00:05:41.305 "trtype": "pcie", 00:05:41.305 "traddr": "0000:00:10.0", 00:05:41.305 "name": "Nvme0" 00:05:41.305 }, 00:05:41.305 "method": "bdev_nvme_attach_controller" 00:05:41.305 }, 00:05:41.305 { 00:05:41.305 "method": "bdev_wait_for_examine" 00:05:41.305 } 00:05:41.305 ] 00:05:41.305 } 00:05:41.305 ] 00:05:41.305 } 00:05:41.563 [2024-10-01 13:33:33.208147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.563 [2024-10-01 13:33:33.267559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.563 [2024-10-01 13:33:33.297564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.822  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:41.822 00:05:41.822 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:41.822 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:41.822 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:41.822 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:41.822 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:41.822 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:41.823 13:33:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.389 13:33:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:42.389 13:33:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:42.389 13:33:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:42.389 13:33:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.389 { 00:05:42.389 "subsystems": [ 00:05:42.389 { 00:05:42.389 "subsystem": "bdev", 00:05:42.389 "config": [ 00:05:42.389 { 00:05:42.389 "params": { 00:05:42.389 "trtype": "pcie", 00:05:42.389 "traddr": "0000:00:10.0", 00:05:42.389 "name": "Nvme0" 00:05:42.389 }, 00:05:42.389 "method": "bdev_nvme_attach_controller" 00:05:42.389 }, 00:05:42.389 { 00:05:42.389 "method": "bdev_wait_for_examine" 00:05:42.389 } 00:05:42.389 ] 00:05:42.389 } 00:05:42.389 ] 00:05:42.389 } 00:05:42.389 [2024-10-01 13:33:34.165670] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:42.389 [2024-10-01 13:33:34.165794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59833 ] 00:05:42.648 [2024-10-01 13:33:34.311454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.648 [2024-10-01 13:33:34.381857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.648 [2024-10-01 13:33:34.415353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.906  Copying: 48/48 [kB] (average 46 MBps) 00:05:42.906 00:05:42.906 13:33:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:42.906 13:33:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:42.906 13:33:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:42.906 13:33:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.906 [2024-10-01 13:33:34.721263] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:42.906 [2024-10-01 13:33:34.721349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59841 ] 00:05:42.906 { 00:05:42.906 "subsystems": [ 00:05:42.906 { 00:05:42.906 "subsystem": "bdev", 00:05:42.906 "config": [ 00:05:42.906 { 00:05:42.906 "params": { 00:05:42.906 "trtype": "pcie", 00:05:42.906 "traddr": "0000:00:10.0", 00:05:42.906 "name": "Nvme0" 00:05:42.906 }, 00:05:42.906 "method": "bdev_nvme_attach_controller" 00:05:42.906 }, 00:05:42.906 { 00:05:42.906 "method": "bdev_wait_for_examine" 00:05:42.906 } 00:05:42.906 ] 00:05:42.906 } 00:05:42.906 ] 00:05:42.906 } 00:05:43.165 [2024-10-01 13:33:34.857352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.165 [2024-10-01 13:33:34.916632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.165 [2024-10-01 13:33:34.946885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.423  Copying: 48/48 [kB] (average 46 MBps) 00:05:43.424 00:05:43.424 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:43.424 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:43.424 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:43.424 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:43.424 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:43.424 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:43.424 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:43.424 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:43.424 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:05:43.424 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:43.424 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.424 { 00:05:43.424 "subsystems": [ 00:05:43.424 { 00:05:43.424 "subsystem": "bdev", 00:05:43.424 "config": [ 00:05:43.424 { 00:05:43.424 "params": { 00:05:43.424 "trtype": "pcie", 00:05:43.424 "traddr": "0000:00:10.0", 00:05:43.424 "name": "Nvme0" 00:05:43.424 }, 00:05:43.424 "method": "bdev_nvme_attach_controller" 00:05:43.424 }, 00:05:43.424 { 00:05:43.424 "method": "bdev_wait_for_examine" 00:05:43.424 } 00:05:43.424 ] 00:05:43.424 } 00:05:43.424 ] 00:05:43.424 } 00:05:43.424 [2024-10-01 13:33:35.263994] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:43.424 [2024-10-01 13:33:35.264103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59862 ] 00:05:43.683 [2024-10-01 13:33:35.402424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.683 [2024-10-01 13:33:35.461777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.683 [2024-10-01 13:33:35.491889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.942  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:43.942 00:05:43.942 00:05:43.942 real 0m13.311s 00:05:43.942 user 0m10.206s 00:05:43.942 sys 0m3.795s 00:05:43.942 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.942 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.942 ************************************ 00:05:43.942 END TEST dd_rw 00:05:43.942 ************************************ 00:05:43.942 13:33:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:43.942 13:33:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.942 13:33:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.942 13:33:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.942 ************************************ 00:05:43.942 START TEST dd_rw_offset 00:05:43.942 ************************************ 00:05:43.942 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:05:43.942 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:43.942 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:43.942 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:43.942 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:44.202 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:44.202 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=9id2j36upt9v1xv7xx2kfc00jys26gsgxi9qmwsihkkzr0nt4d38jgtr1fqn6kdbjqty51hzu8y09rbuu656d832k7hfx5znoq31qv97xcvhlv8qlpqf9y987dquq3dp0nhsnnnl35f7os6nistln2psirei2djr2lap3p8wwhpy4gxas6o62pkm4d11y0tw9i52g9ysgfan8s9ecc91je4rf1icc1vdd43cuo61jqjznr8nqdxkgww4mat5jhob91tmx4h0u3ckuy0bj256hzgz2qhbuig9rhns52h0adtxztuxybic6mxfy4dov3m0oizxrdvv116774qpwgnie4a03bfk0xq43z67gwklndw8n6d8xuz6xt8igig08xrjd4rvfl4d857954f1tsll8tky1invegndztr6pax9zi7dwmu3dprz1caltpf4bee58r2nh30t2ocptoetc0lcpwm9s654sv92j6k2woyaa9dtekm705xfelxr5fdvit5xz2kqlj1bkrxmseqa7fxgrhy0k29bzir2z751a5tvxycwz3b9l059ggplgy3x1pgv17646spj4rf1h491pgxmwu58xprq7g6slu8gybpf8qss5cxiunvcn224o0f2s5yqchpzm6op75oxc7r7epx6p85j31vp7apc744qwno7wm574env3fozxv2pog999x92fw2jr127e06jyjjypxgjnswybeu725kt9twu099o5ekwwa4b630jb92dtljjbxnzxxvzbykeqc8s6gtcmhztboou86iy1mco5jqz78ku4n1flmzrujesyt5yzzap3rkzh9o5o7lj9ctavlwhdr8z8dpktvszr0e2ca15rgu1zpd86kem9qmedxpfuprq4cc9zxnne97wjfjtnvpxzy3su8isjo61k56o68s73sbm5ciga9uhozkwzluw374mjh1l1xdb338se2e3vu6494wx0obk2nkcm1h1mjzu059hgoqndig96g3pdp1jandhrqpc4ogum5lbxg5evmu2ewb2pwusdak338n5e6u4e5lm2ebfe6rb0iblllcd9usueukr620s4b5qdylx2tglhz1dz8xfm80vxxnb9jvqc9des5uc24jrtng7o3fq01q9gupoiraeafq5hvbpys5kojg7zgn4fu4aoi7r5yag7nd3eflwpbm7hmmkvu5jstt1y943xdsc7w3x1wy8ntcckfidr5tkdp1dlgr2jg4btaty0y62lgenixhcllvom7cdiwmmj84msglkueryvw6ost2pgjwt31gigstq46c0ykkoz7k3nitih3tiofr0eqbplmdohdln531ivziasvz8phk62nhv4z2vt1fq5gkgwfti9vi4m24v4lfacrmbbll4uzbobfwc1yl5oeqy5j46byj26eb554ynwrkvp0w9m43s1vycyyfr4iyx5ntuu108g27frl033s5dobj5194k850f6pe01io1s58g7y52s5gf50gddvzoh4iz28dd6c7rneszz530gexu693xb37u4xkj93a7gvtas9agicvivv2yws3w6wqy080dw1z9nq1bh4srynff3jextvs0iw1u85d8im6b3bwmggdb06wvyfqc6a8tgtk52tc30kdmyhpdzj3ukucwlw1lh7ey1l5kin6qxaprdtiosycqc76it42x95lgf346pogf4ndmy16b20peodiaqgqbaxqoggwp1ozykf8r87wna48vx5sb3ntk1fvyumiuut409hxq43xxog3obexa64urj945a4xu01qeizb30goixb3eqy323a6pvufth9baw1pm2hktj086vgn6inmkr399c7wk0vkhch4616qj19w3q0mhgbunny143pv2eakgmnmga140d61b1x2ee6tw3j47nv0dglqwo56moaiglwpgejwztr5v8dxl41x3sc58ru590d49kdb9kf8t1lxq09z5zl124zvtjdlzii5ixljiybkwfuspd1pvieouceicrkqmhtmq2qy8lhyg4cgia4uf4i6w252rmegw0nkv5h18eyvpbj1xri78b353rmbwf9856twu1aiwnr9aosox8cf4e307aiair5sitneifcs9cduce6if1ez23xwni82sjmuafcv4os7wezo961aw5llvnehzrz56lppajo8l1hz4cujpcbfmcgvioyf8dszbiirsmuf4zjfckz86y5a52pxc7fqvwad57qw1ip5uvi4f43h3k5z8pc9ld90a57sl2n8wumzsp34snbk3czwvoj94b2q5687nevjtxwjp87gszvv6qrsaisvrs300eedqt9cqjukuxkrh3ytcn95bnsokt1ugdcxi35e3rbqd75l1n48oect5158ky52avdyl2xj1v0086exzf83xl0ln3dnopco0qykgv96colcawyjdfeb60t5e0dfnfdlgcmvoqvrhogkwpymyoo6nyzpdkxdd3xqpp8n19zse6whtwx2rvql445iuh939fziq4hdt00absjem7e0bxdkv8j3wykt67wr4186f0hv8r7us6euv5bb9vb0ho4vuif7twshqawabcdu67r5m17lu2kitot5v38r3ldb48wlfto2zrabcepzqn0p2ockt2479z1053xm1bsmb38ecemhcr9macngonah6xg81jxnkb0vcau18uevc0svgbtesnlm3vxtr0umftthjpzvjrsj8br8b6xgmtbakameuguxk5epteu1bemmie8uf36af6rqjzxr0ii1cszb9dnxrsujv3u11gd26h4oxfbbi8fpy3mbvdexso66gwvfugpq29797gmvzd4uwdt8sp0zsvoh6hkd89ujdcot8t1sf3cnjaj6r3974q53icsoplhqk64jjssa3wu2pcvp0c1uqscxs4ge80p0bzqc2md6z4vd9r6h1bw5m5whulqz6yec68ngg3dx6e9pwzchhvwjycpdycg4256prqucyhcjcimcc3v0nn7xbbh16axn7cebtp1k5ij6fkxl0imd0wv1awa9co53lq9h25zz39pbciybiwub5964q1dgsbmcx9v8euqimen3qp4bvobrtixs9fafwkizqm0ieykt44cwnba5y3a8xl9usrl04zt22rmjwx8j12nmxynmo6q7xywmzmr82zzdlaiwtuzn65kwlb259tv7cfvongvipgt7rs74ysbcy70i7r2p56zexso8uk1gcu1ctpqiboeyrc73gifprbwlfesj74sqwtsfh60a1mqpnf5z0g7s2xoxknew6qz5pi4pbq7iwx4f68ennr3ecr8vi9c2v1rsiqxiatxlgg7w8yrgbdni6myvgbp6ikbkhflel3vskqvjzr73q8p7ah2slc0m47wssiacajza6rkllrhplqsits5zrul6h88nog8718dsnemi00x7q7edxevgwyognrwnf5eyfnsxr47unpbvmyjtbu07v0i9x7cvk17yweha4ew65nwmr7sqchedqob8yi3
me7e6exr9ufb6bhtq6caaykn4u7usa1s4rt89j9qjbunw22jsrntgi7g5uujzz93aq8m2caie5l76ycnz54ae2vl8lci3f3qw7clhyi75nz21xl82zbbeh5vb8opfub5p9mv8t5tz3f78fwgyc6tfgnb1t51yuwqe9ro1td5rgb9l6ladvheg96m3tl03i05d3fuy6rrp0rpsqolb7zwoylj22emg6h03ihs836kl7i00g3xof268cr0h3o010qefwho4t7k4a7hub6ha0bqy70lis2x4vvbemumful2emrf0p12uvbhwlj0dtvqws92l6k4eemxsr7vfm6kaqpj0us9c1x6ulfe7d5pfk8aocph5hjrczfdiuga0jou9hq7ap6jhc7ix85qemylwmqvsvdlr1royzxbjv3qj8hblt8qv0vweamvjxcbqwk2ieulx5dhe26kv59j1in92s8cue2u4nmnu946qn6f8189tqa2xbqt0vfngkfy7s1i4afii9c46w936ljnikd0szzqsubeu5hxni199t 00:05:44.202 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:44.202 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:44.202 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:44.202 13:33:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:44.202 [2024-10-01 13:33:35.904125] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:44.202 [2024-10-01 13:33:35.904229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59887 ] 00:05:44.202 { 00:05:44.202 "subsystems": [ 00:05:44.202 { 00:05:44.202 "subsystem": "bdev", 00:05:44.202 "config": [ 00:05:44.202 { 00:05:44.202 "params": { 00:05:44.202 "trtype": "pcie", 00:05:44.202 "traddr": "0000:00:10.0", 00:05:44.202 "name": "Nvme0" 00:05:44.202 }, 00:05:44.202 "method": "bdev_nvme_attach_controller" 00:05:44.202 }, 00:05:44.202 { 00:05:44.202 "method": "bdev_wait_for_examine" 00:05:44.202 } 00:05:44.202 ] 00:05:44.202 } 00:05:44.202 ] 00:05:44.202 } 00:05:44.202 [2024-10-01 13:33:36.042416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.461 [2024-10-01 13:33:36.101900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.461 [2024-10-01 13:33:36.131962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.720  Copying: 4096/4096 [B] (average 4000 kBps) 00:05:44.720 00:05:44.720 13:33:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:44.720 13:33:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:44.720 13:33:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:44.720 13:33:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:44.720 [2024-10-01 13:33:36.443771] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:44.720 [2024-10-01 13:33:36.443878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59906 ] 00:05:44.720 { 00:05:44.720 "subsystems": [ 00:05:44.720 { 00:05:44.720 "subsystem": "bdev", 00:05:44.720 "config": [ 00:05:44.720 { 00:05:44.720 "params": { 00:05:44.720 "trtype": "pcie", 00:05:44.720 "traddr": "0000:00:10.0", 00:05:44.720 "name": "Nvme0" 00:05:44.720 }, 00:05:44.720 "method": "bdev_nvme_attach_controller" 00:05:44.720 }, 00:05:44.720 { 00:05:44.720 "method": "bdev_wait_for_examine" 00:05:44.720 } 00:05:44.720 ] 00:05:44.720 } 00:05:44.720 ] 00:05:44.720 } 00:05:44.720 [2024-10-01 13:33:36.579489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.979 [2024-10-01 13:33:36.630556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.979 [2024-10-01 13:33:36.658180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.238  Copying: 4096/4096 [B] (average 4000 kBps) 00:05:45.238 00:05:45.238 13:33:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 9id2j36upt9v1xv7xx2kfc00jys26gsgxi9qmwsihkkzr0nt4d38jgtr1fqn6kdbjqty51hzu8y09rbuu656d832k7hfx5znoq31qv97xcvhlv8qlpqf9y987dquq3dp0nhsnnnl35f7os6nistln2psirei2djr2lap3p8wwhpy4gxas6o62pkm4d11y0tw9i52g9ysgfan8s9ecc91je4rf1icc1vdd43cuo61jqjznr8nqdxkgww4mat5jhob91tmx4h0u3ckuy0bj256hzgz2qhbuig9rhns52h0adtxztuxybic6mxfy4dov3m0oizxrdvv116774qpwgnie4a03bfk0xq43z67gwklndw8n6d8xuz6xt8igig08xrjd4rvfl4d857954f1tsll8tky1invegndztr6pax9zi7dwmu3dprz1caltpf4bee58r2nh30t2ocptoetc0lcpwm9s654sv92j6k2woyaa9dtekm705xfelxr5fdvit5xz2kqlj1bkrxmseqa7fxgrhy0k29bzir2z751a5tvxycwz3b9l059ggplgy3x1pgv17646spj4rf1h491pgxmwu58xprq7g6slu8gybpf8qss5cxiunvcn224o0f2s5yqchpzm6op75oxc7r7epx6p85j31vp7apc744qwno7wm574env3fozxv2pog999x92fw2jr127e06jyjjypxgjnswybeu725kt9twu099o5ekwwa4b630jb92dtljjbxnzxxvzbykeqc8s6gtcmhztboou86iy1mco5jqz78ku4n1flmzrujesyt5yzzap3rkzh9o5o7lj9ctavlwhdr8z8dpktvszr0e2ca15rgu1zpd86kem9qmedxpfuprq4cc9zxnne97wjfjtnvpxzy3su8isjo61k56o68s73sbm5ciga9uhozkwzluw374mjh1l1xdb338se2e3vu6494wx0obk2nkcm1h1mjzu059hgoqndig96g3pdp1jandhrqpc4ogum5lbxg5evmu2ewb2pwusdak338n5e6u4e5lm2ebfe6rb0iblllcd9usueukr620s4b5qdylx2tglhz1dz8xfm80vxxnb9jvqc9des5uc24jrtng7o3fq01q9gupoiraeafq5hvbpys5kojg7zgn4fu4aoi7r5yag7nd3eflwpbm7hmmkvu5jstt1y943xdsc7w3x1wy8ntcckfidr5tkdp1dlgr2jg4btaty0y62lgenixhcllvom7cdiwmmj84msglkueryvw6ost2pgjwt31gigstq46c0ykkoz7k3nitih3tiofr0eqbplmdohdln531ivziasvz8phk62nhv4z2vt1fq5gkgwfti9vi4m24v4lfacrmbbll4uzbobfwc1yl5oeqy5j46byj26eb554ynwrkvp0w9m43s1vycyyfr4iyx5ntuu108g27frl033s5dobj5194k850f6pe01io1s58g7y52s5gf50gddvzoh4iz28dd6c7rneszz530gexu693xb37u4xkj93a7gvtas9agicvivv2yws3w6wqy080dw1z9nq1bh4srynff3jextvs0iw1u85d8im6b3bwmggdb06wvyfqc6a8tgtk52tc30kdmyhpdzj3ukucwlw1lh7ey1l5kin6qxaprdtiosycqc76it42x95lgf346pogf4ndmy16b20peodiaqgqbaxqoggwp1ozykf8r87wna48vx5sb3ntk1fvyumiuut409hxq43xxog3obexa64urj945a4xu01qeizb30goixb3eqy323a6pvufth9baw1pm2hktj086vgn6inmkr399c7wk0vkhch4616qj19w3q0mhgbunny143pv2eakgmnmga140d61b1x2ee6tw3j47nv0dglqwo56moaiglwpgejwztr5v8dxl41x3sc58ru590d49kdb9kf8t1lxq09z5zl124zvtjdlzii5ixljiybkwfuspd1pvieouceicrkqmhtmq2qy8lhyg4cgia4uf4i6w252rmegw0nkv5h18eyvpbj1xri78b353rmbwf9856twu1aiwnr9aosox8cf4e307aiair5sitneifcs9cduce6if1ez23xwn
i82sjmuafcv4os7wezo961aw5llvnehzrz56lppajo8l1hz4cujpcbfmcgvioyf8dszbiirsmuf4zjfckz86y5a52pxc7fqvwad57qw1ip5uvi4f43h3k5z8pc9ld90a57sl2n8wumzsp34snbk3czwvoj94b2q5687nevjtxwjp87gszvv6qrsaisvrs300eedqt9cqjukuxkrh3ytcn95bnsokt1ugdcxi35e3rbqd75l1n48oect5158ky52avdyl2xj1v0086exzf83xl0ln3dnopco0qykgv96colcawyjdfeb60t5e0dfnfdlgcmvoqvrhogkwpymyoo6nyzpdkxdd3xqpp8n19zse6whtwx2rvql445iuh939fziq4hdt00absjem7e0bxdkv8j3wykt67wr4186f0hv8r7us6euv5bb9vb0ho4vuif7twshqawabcdu67r5m17lu2kitot5v38r3ldb48wlfto2zrabcepzqn0p2ockt2479z1053xm1bsmb38ecemhcr9macngonah6xg81jxnkb0vcau18uevc0svgbtesnlm3vxtr0umftthjpzvjrsj8br8b6xgmtbakameuguxk5epteu1bemmie8uf36af6rqjzxr0ii1cszb9dnxrsujv3u11gd26h4oxfbbi8fpy3mbvdexso66gwvfugpq29797gmvzd4uwdt8sp0zsvoh6hkd89ujdcot8t1sf3cnjaj6r3974q53icsoplhqk64jjssa3wu2pcvp0c1uqscxs4ge80p0bzqc2md6z4vd9r6h1bw5m5whulqz6yec68ngg3dx6e9pwzchhvwjycpdycg4256prqucyhcjcimcc3v0nn7xbbh16axn7cebtp1k5ij6fkxl0imd0wv1awa9co53lq9h25zz39pbciybiwub5964q1dgsbmcx9v8euqimen3qp4bvobrtixs9fafwkizqm0ieykt44cwnba5y3a8xl9usrl04zt22rmjwx8j12nmxynmo6q7xywmzmr82zzdlaiwtuzn65kwlb259tv7cfvongvipgt7rs74ysbcy70i7r2p56zexso8uk1gcu1ctpqiboeyrc73gifprbwlfesj74sqwtsfh60a1mqpnf5z0g7s2xoxknew6qz5pi4pbq7iwx4f68ennr3ecr8vi9c2v1rsiqxiatxlgg7w8yrgbdni6myvgbp6ikbkhflel3vskqvjzr73q8p7ah2slc0m47wssiacajza6rkllrhplqsits5zrul6h88nog8718dsnemi00x7q7edxevgwyognrwnf5eyfnsxr47unpbvmyjtbu07v0i9x7cvk17yweha4ew65nwmr7sqchedqob8yi3me7e6exr9ufb6bhtq6caaykn4u7usa1s4rt89j9qjbunw22jsrntgi7g5uujzz93aq8m2caie5l76ycnz54ae2vl8lci3f3qw7clhyi75nz21xl82zbbeh5vb8opfub5p9mv8t5tz3f78fwgyc6tfgnb1t51yuwqe9ro1td5rgb9l6ladvheg96m3tl03i05d3fuy6rrp0rpsqolb7zwoylj22emg6h03ihs836kl7i00g3xof268cr0h3o010qefwho4t7k4a7hub6ha0bqy70lis2x4vvbemumful2emrf0p12uvbhwlj0dtvqws92l6k4eemxsr7vfm6kaqpj0us9c1x6ulfe7d5pfk8aocph5hjrczfdiuga0jou9hq7ap6jhc7ix85qemylwmqvsvdlr1royzxbjv3qj8hblt8qv0vweamvjxcbqwk2ieulx5dhe26kv59j1in92s8cue2u4nmnu946qn6f8189tqa2xbqt0vfngkfy7s1i4afii9c46w936ljnikd0szzqsubeu5hxni199t == 
\9\i\d\2\j\3\6\u\p\t\9\v\1\x\v\7\x\x\2\k\f\c\0\0\j\y\s\2\6\g\s\g\x\i\9\q\m\w\s\i\h\k\k\z\r\0\n\t\4\d\3\8\j\g\t\r\1\f\q\n\6\k\d\b\j\q\t\y\5\1\h\z\u\8\y\0\9\r\b\u\u\6\5\6\d\8\3\2\k\7\h\f\x\5\z\n\o\q\3\1\q\v\9\7\x\c\v\h\l\v\8\q\l\p\q\f\9\y\9\8\7\d\q\u\q\3\d\p\0\n\h\s\n\n\n\l\3\5\f\7\o\s\6\n\i\s\t\l\n\2\p\s\i\r\e\i\2\d\j\r\2\l\a\p\3\p\8\w\w\h\p\y\4\g\x\a\s\6\o\6\2\p\k\m\4\d\1\1\y\0\t\w\9\i\5\2\g\9\y\s\g\f\a\n\8\s\9\e\c\c\9\1\j\e\4\r\f\1\i\c\c\1\v\d\d\4\3\c\u\o\6\1\j\q\j\z\n\r\8\n\q\d\x\k\g\w\w\4\m\a\t\5\j\h\o\b\9\1\t\m\x\4\h\0\u\3\c\k\u\y\0\b\j\2\5\6\h\z\g\z\2\q\h\b\u\i\g\9\r\h\n\s\5\2\h\0\a\d\t\x\z\t\u\x\y\b\i\c\6\m\x\f\y\4\d\o\v\3\m\0\o\i\z\x\r\d\v\v\1\1\6\7\7\4\q\p\w\g\n\i\e\4\a\0\3\b\f\k\0\x\q\4\3\z\6\7\g\w\k\l\n\d\w\8\n\6\d\8\x\u\z\6\x\t\8\i\g\i\g\0\8\x\r\j\d\4\r\v\f\l\4\d\8\5\7\9\5\4\f\1\t\s\l\l\8\t\k\y\1\i\n\v\e\g\n\d\z\t\r\6\p\a\x\9\z\i\7\d\w\m\u\3\d\p\r\z\1\c\a\l\t\p\f\4\b\e\e\5\8\r\2\n\h\3\0\t\2\o\c\p\t\o\e\t\c\0\l\c\p\w\m\9\s\6\5\4\s\v\9\2\j\6\k\2\w\o\y\a\a\9\d\t\e\k\m\7\0\5\x\f\e\l\x\r\5\f\d\v\i\t\5\x\z\2\k\q\l\j\1\b\k\r\x\m\s\e\q\a\7\f\x\g\r\h\y\0\k\2\9\b\z\i\r\2\z\7\5\1\a\5\t\v\x\y\c\w\z\3\b\9\l\0\5\9\g\g\p\l\g\y\3\x\1\p\g\v\1\7\6\4\6\s\p\j\4\r\f\1\h\4\9\1\p\g\x\m\w\u\5\8\x\p\r\q\7\g\6\s\l\u\8\g\y\b\p\f\8\q\s\s\5\c\x\i\u\n\v\c\n\2\2\4\o\0\f\2\s\5\y\q\c\h\p\z\m\6\o\p\7\5\o\x\c\7\r\7\e\p\x\6\p\8\5\j\3\1\v\p\7\a\p\c\7\4\4\q\w\n\o\7\w\m\5\7\4\e\n\v\3\f\o\z\x\v\2\p\o\g\9\9\9\x\9\2\f\w\2\j\r\1\2\7\e\0\6\j\y\j\j\y\p\x\g\j\n\s\w\y\b\e\u\7\2\5\k\t\9\t\w\u\0\9\9\o\5\e\k\w\w\a\4\b\6\3\0\j\b\9\2\d\t\l\j\j\b\x\n\z\x\x\v\z\b\y\k\e\q\c\8\s\6\g\t\c\m\h\z\t\b\o\o\u\8\6\i\y\1\m\c\o\5\j\q\z\7\8\k\u\4\n\1\f\l\m\z\r\u\j\e\s\y\t\5\y\z\z\a\p\3\r\k\z\h\9\o\5\o\7\l\j\9\c\t\a\v\l\w\h\d\r\8\z\8\d\p\k\t\v\s\z\r\0\e\2\c\a\1\5\r\g\u\1\z\p\d\8\6\k\e\m\9\q\m\e\d\x\p\f\u\p\r\q\4\c\c\9\z\x\n\n\e\9\7\w\j\f\j\t\n\v\p\x\z\y\3\s\u\8\i\s\j\o\6\1\k\5\6\o\6\8\s\7\3\s\b\m\5\c\i\g\a\9\u\h\o\z\k\w\z\l\u\w\3\7\4\m\j\h\1\l\1\x\d\b\3\3\8\s\e\2\e\3\v\u\6\4\9\4\w\x\0\o\b\k\2\n\k\c\m\1\h\1\m\j\z\u\0\5\9\h\g\o\q\n\d\i\g\9\6\g\3\p\d\p\1\j\a\n\d\h\r\q\p\c\4\o\g\u\m\5\l\b\x\g\5\e\v\m\u\2\e\w\b\2\p\w\u\s\d\a\k\3\3\8\n\5\e\6\u\4\e\5\l\m\2\e\b\f\e\6\r\b\0\i\b\l\l\l\c\d\9\u\s\u\e\u\k\r\6\2\0\s\4\b\5\q\d\y\l\x\2\t\g\l\h\z\1\d\z\8\x\f\m\8\0\v\x\x\n\b\9\j\v\q\c\9\d\e\s\5\u\c\2\4\j\r\t\n\g\7\o\3\f\q\0\1\q\9\g\u\p\o\i\r\a\e\a\f\q\5\h\v\b\p\y\s\5\k\o\j\g\7\z\g\n\4\f\u\4\a\o\i\7\r\5\y\a\g\7\n\d\3\e\f\l\w\p\b\m\7\h\m\m\k\v\u\5\j\s\t\t\1\y\9\4\3\x\d\s\c\7\w\3\x\1\w\y\8\n\t\c\c\k\f\i\d\r\5\t\k\d\p\1\d\l\g\r\2\j\g\4\b\t\a\t\y\0\y\6\2\l\g\e\n\i\x\h\c\l\l\v\o\m\7\c\d\i\w\m\m\j\8\4\m\s\g\l\k\u\e\r\y\v\w\6\o\s\t\2\p\g\j\w\t\3\1\g\i\g\s\t\q\4\6\c\0\y\k\k\o\z\7\k\3\n\i\t\i\h\3\t\i\o\f\r\0\e\q\b\p\l\m\d\o\h\d\l\n\5\3\1\i\v\z\i\a\s\v\z\8\p\h\k\6\2\n\h\v\4\z\2\v\t\1\f\q\5\g\k\g\w\f\t\i\9\v\i\4\m\2\4\v\4\l\f\a\c\r\m\b\b\l\l\4\u\z\b\o\b\f\w\c\1\y\l\5\o\e\q\y\5\j\4\6\b\y\j\2\6\e\b\5\5\4\y\n\w\r\k\v\p\0\w\9\m\4\3\s\1\v\y\c\y\y\f\r\4\i\y\x\5\n\t\u\u\1\0\8\g\2\7\f\r\l\0\3\3\s\5\d\o\b\j\5\1\9\4\k\8\5\0\f\6\p\e\0\1\i\o\1\s\5\8\g\7\y\5\2\s\5\g\f\5\0\g\d\d\v\z\o\h\4\i\z\2\8\d\d\6\c\7\r\n\e\s\z\z\5\3\0\g\e\x\u\6\9\3\x\b\3\7\u\4\x\k\j\9\3\a\7\g\v\t\a\s\9\a\g\i\c\v\i\v\v\2\y\w\s\3\w\6\w\q\y\0\8\0\d\w\1\z\9\n\q\1\b\h\4\s\r\y\n\f\f\3\j\e\x\t\v\s\0\i\w\1\u\8\5\d\8\i\m\6\b\3\b\w\m\g\g\d\b\0\6\w\v\y\f\q\c\6\a\8\t\g\t\k\5\2\t\c\3\0\k\d\m\y\h\p\d\z\j\3\u\k\u\c\w\l\w\1\l\h\7\e\y\1\l\5\k\i\n\6\q\x\a\p\r\d\t\i\o\s\y\c\q\c\7\6\i\t\4\2\x\9\5\l\g\f\3\4\6\p\o\g\f\4\n\d\m\y\1\6\b\2\0\p\e\o\d\i\a\q\g\q\b\a\x\q\o\g\g\w\p\1\o\z\y\k\f\8\r\8\7\w\n\a\4\8\v\x\5\s\b\3\n\t\k\1\f\v\y\u\m\i\u\u\
t\4\0\9\h\x\q\4\3\x\x\o\g\3\o\b\e\x\a\6\4\u\r\j\9\4\5\a\4\x\u\0\1\q\e\i\z\b\3\0\g\o\i\x\b\3\e\q\y\3\2\3\a\6\p\v\u\f\t\h\9\b\a\w\1\p\m\2\h\k\t\j\0\8\6\v\g\n\6\i\n\m\k\r\3\9\9\c\7\w\k\0\v\k\h\c\h\4\6\1\6\q\j\1\9\w\3\q\0\m\h\g\b\u\n\n\y\1\4\3\p\v\2\e\a\k\g\m\n\m\g\a\1\4\0\d\6\1\b\1\x\2\e\e\6\t\w\3\j\4\7\n\v\0\d\g\l\q\w\o\5\6\m\o\a\i\g\l\w\p\g\e\j\w\z\t\r\5\v\8\d\x\l\4\1\x\3\s\c\5\8\r\u\5\9\0\d\4\9\k\d\b\9\k\f\8\t\1\l\x\q\0\9\z\5\z\l\1\2\4\z\v\t\j\d\l\z\i\i\5\i\x\l\j\i\y\b\k\w\f\u\s\p\d\1\p\v\i\e\o\u\c\e\i\c\r\k\q\m\h\t\m\q\2\q\y\8\l\h\y\g\4\c\g\i\a\4\u\f\4\i\6\w\2\5\2\r\m\e\g\w\0\n\k\v\5\h\1\8\e\y\v\p\b\j\1\x\r\i\7\8\b\3\5\3\r\m\b\w\f\9\8\5\6\t\w\u\1\a\i\w\n\r\9\a\o\s\o\x\8\c\f\4\e\3\0\7\a\i\a\i\r\5\s\i\t\n\e\i\f\c\s\9\c\d\u\c\e\6\i\f\1\e\z\2\3\x\w\n\i\8\2\s\j\m\u\a\f\c\v\4\o\s\7\w\e\z\o\9\6\1\a\w\5\l\l\v\n\e\h\z\r\z\5\6\l\p\p\a\j\o\8\l\1\h\z\4\c\u\j\p\c\b\f\m\c\g\v\i\o\y\f\8\d\s\z\b\i\i\r\s\m\u\f\4\z\j\f\c\k\z\8\6\y\5\a\5\2\p\x\c\7\f\q\v\w\a\d\5\7\q\w\1\i\p\5\u\v\i\4\f\4\3\h\3\k\5\z\8\p\c\9\l\d\9\0\a\5\7\s\l\2\n\8\w\u\m\z\s\p\3\4\s\n\b\k\3\c\z\w\v\o\j\9\4\b\2\q\5\6\8\7\n\e\v\j\t\x\w\j\p\8\7\g\s\z\v\v\6\q\r\s\a\i\s\v\r\s\3\0\0\e\e\d\q\t\9\c\q\j\u\k\u\x\k\r\h\3\y\t\c\n\9\5\b\n\s\o\k\t\1\u\g\d\c\x\i\3\5\e\3\r\b\q\d\7\5\l\1\n\4\8\o\e\c\t\5\1\5\8\k\y\5\2\a\v\d\y\l\2\x\j\1\v\0\0\8\6\e\x\z\f\8\3\x\l\0\l\n\3\d\n\o\p\c\o\0\q\y\k\g\v\9\6\c\o\l\c\a\w\y\j\d\f\e\b\6\0\t\5\e\0\d\f\n\f\d\l\g\c\m\v\o\q\v\r\h\o\g\k\w\p\y\m\y\o\o\6\n\y\z\p\d\k\x\d\d\3\x\q\p\p\8\n\1\9\z\s\e\6\w\h\t\w\x\2\r\v\q\l\4\4\5\i\u\h\9\3\9\f\z\i\q\4\h\d\t\0\0\a\b\s\j\e\m\7\e\0\b\x\d\k\v\8\j\3\w\y\k\t\6\7\w\r\4\1\8\6\f\0\h\v\8\r\7\u\s\6\e\u\v\5\b\b\9\v\b\0\h\o\4\v\u\i\f\7\t\w\s\h\q\a\w\a\b\c\d\u\6\7\r\5\m\1\7\l\u\2\k\i\t\o\t\5\v\3\8\r\3\l\d\b\4\8\w\l\f\t\o\2\z\r\a\b\c\e\p\z\q\n\0\p\2\o\c\k\t\2\4\7\9\z\1\0\5\3\x\m\1\b\s\m\b\3\8\e\c\e\m\h\c\r\9\m\a\c\n\g\o\n\a\h\6\x\g\8\1\j\x\n\k\b\0\v\c\a\u\1\8\u\e\v\c\0\s\v\g\b\t\e\s\n\l\m\3\v\x\t\r\0\u\m\f\t\t\h\j\p\z\v\j\r\s\j\8\b\r\8\b\6\x\g\m\t\b\a\k\a\m\e\u\g\u\x\k\5\e\p\t\e\u\1\b\e\m\m\i\e\8\u\f\3\6\a\f\6\r\q\j\z\x\r\0\i\i\1\c\s\z\b\9\d\n\x\r\s\u\j\v\3\u\1\1\g\d\2\6\h\4\o\x\f\b\b\i\8\f\p\y\3\m\b\v\d\e\x\s\o\6\6\g\w\v\f\u\g\p\q\2\9\7\9\7\g\m\v\z\d\4\u\w\d\t\8\s\p\0\z\s\v\o\h\6\h\k\d\8\9\u\j\d\c\o\t\8\t\1\s\f\3\c\n\j\a\j\6\r\3\9\7\4\q\5\3\i\c\s\o\p\l\h\q\k\6\4\j\j\s\s\a\3\w\u\2\p\c\v\p\0\c\1\u\q\s\c\x\s\4\g\e\8\0\p\0\b\z\q\c\2\m\d\6\z\4\v\d\9\r\6\h\1\b\w\5\m\5\w\h\u\l\q\z\6\y\e\c\6\8\n\g\g\3\d\x\6\e\9\p\w\z\c\h\h\v\w\j\y\c\p\d\y\c\g\4\2\5\6\p\r\q\u\c\y\h\c\j\c\i\m\c\c\3\v\0\n\n\7\x\b\b\h\1\6\a\x\n\7\c\e\b\t\p\1\k\5\i\j\6\f\k\x\l\0\i\m\d\0\w\v\1\a\w\a\9\c\o\5\3\l\q\9\h\2\5\z\z\3\9\p\b\c\i\y\b\i\w\u\b\5\9\6\4\q\1\d\g\s\b\m\c\x\9\v\8\e\u\q\i\m\e\n\3\q\p\4\b\v\o\b\r\t\i\x\s\9\f\a\f\w\k\i\z\q\m\0\i\e\y\k\t\4\4\c\w\n\b\a\5\y\3\a\8\x\l\9\u\s\r\l\0\4\z\t\2\2\r\m\j\w\x\8\j\1\2\n\m\x\y\n\m\o\6\q\7\x\y\w\m\z\m\r\8\2\z\z\d\l\a\i\w\t\u\z\n\6\5\k\w\l\b\2\5\9\t\v\7\c\f\v\o\n\g\v\i\p\g\t\7\r\s\7\4\y\s\b\c\y\7\0\i\7\r\2\p\5\6\z\e\x\s\o\8\u\k\1\g\c\u\1\c\t\p\q\i\b\o\e\y\r\c\7\3\g\i\f\p\r\b\w\l\f\e\s\j\7\4\s\q\w\t\s\f\h\6\0\a\1\m\q\p\n\f\5\z\0\g\7\s\2\x\o\x\k\n\e\w\6\q\z\5\p\i\4\p\b\q\7\i\w\x\4\f\6\8\e\n\n\r\3\e\c\r\8\v\i\9\c\2\v\1\r\s\i\q\x\i\a\t\x\l\g\g\7\w\8\y\r\g\b\d\n\i\6\m\y\v\g\b\p\6\i\k\b\k\h\f\l\e\l\3\v\s\k\q\v\j\z\r\7\3\q\8\p\7\a\h\2\s\l\c\0\m\4\7\w\s\s\i\a\c\a\j\z\a\6\r\k\l\l\r\h\p\l\q\s\i\t\s\5\z\r\u\l\6\h\8\8\n\o\g\8\7\1\8\d\s\n\e\m\i\0\0\x\7\q\7\e\d\x\e\v\g\w\y\o\g\n\r\w\n\f\5\e\y\f\n\s\x\r\4\7\u\n\p\b\v\m\y\j\t\b\u\0\7\v\0\i\9\x\7\c\v\k\1\7\y\w\e\h\a\4\e\w\6\5\n\w\m\r\7\s\q\c\h\e\d\q\o\b\8\y\i\3\m\e\7\e\6
\e\x\r\9\u\f\b\6\b\h\t\q\6\c\a\a\y\k\n\4\u\7\u\s\a\1\s\4\r\t\8\9\j\9\q\j\b\u\n\w\2\2\j\s\r\n\t\g\i\7\g\5\u\u\j\z\z\9\3\a\q\8\m\2\c\a\i\e\5\l\7\6\y\c\n\z\5\4\a\e\2\v\l\8\l\c\i\3\f\3\q\w\7\c\l\h\y\i\7\5\n\z\2\1\x\l\8\2\z\b\b\e\h\5\v\b\8\o\p\f\u\b\5\p\9\m\v\8\t\5\t\z\3\f\7\8\f\w\g\y\c\6\t\f\g\n\b\1\t\5\1\y\u\w\q\e\9\r\o\1\t\d\5\r\g\b\9\l\6\l\a\d\v\h\e\g\9\6\m\3\t\l\0\3\i\0\5\d\3\f\u\y\6\r\r\p\0\r\p\s\q\o\l\b\7\z\w\o\y\l\j\2\2\e\m\g\6\h\0\3\i\h\s\8\3\6\k\l\7\i\0\0\g\3\x\o\f\2\6\8\c\r\0\h\3\o\0\1\0\q\e\f\w\h\o\4\t\7\k\4\a\7\h\u\b\6\h\a\0\b\q\y\7\0\l\i\s\2\x\4\v\v\b\e\m\u\m\f\u\l\2\e\m\r\f\0\p\1\2\u\v\b\h\w\l\j\0\d\t\v\q\w\s\9\2\l\6\k\4\e\e\m\x\s\r\7\v\f\m\6\k\a\q\p\j\0\u\s\9\c\1\x\6\u\l\f\e\7\d\5\p\f\k\8\a\o\c\p\h\5\h\j\r\c\z\f\d\i\u\g\a\0\j\o\u\9\h\q\7\a\p\6\j\h\c\7\i\x\8\5\q\e\m\y\l\w\m\q\v\s\v\d\l\r\1\r\o\y\z\x\b\j\v\3\q\j\8\h\b\l\t\8\q\v\0\v\w\e\a\m\v\j\x\c\b\q\w\k\2\i\e\u\l\x\5\d\h\e\2\6\k\v\5\9\j\1\i\n\9\2\s\8\c\u\e\2\u\4\n\m\n\u\9\4\6\q\n\6\f\8\1\8\9\t\q\a\2\x\b\q\t\0\v\f\n\g\k\f\y\7\s\1\i\4\a\f\i\i\9\c\4\6\w\9\3\6\l\j\n\i\k\d\0\s\z\z\q\s\u\b\e\u\5\h\x\n\i\1\9\9\t ]] 00:05:45.239 00:05:45.239 real 0m1.099s 00:05:45.239 user 0m0.778s 00:05:45.239 sys 0m0.402s 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:45.239 ************************************ 00:05:45.239 END TEST dd_rw_offset 00:05:45.239 ************************************ 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:45.239 13:33:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:45.239 { 00:05:45.239 "subsystems": [ 00:05:45.239 { 00:05:45.239 "subsystem": "bdev", 00:05:45.239 "config": [ 00:05:45.239 { 00:05:45.239 "params": { 00:05:45.239 "trtype": "pcie", 00:05:45.239 "traddr": "0000:00:10.0", 00:05:45.239 "name": "Nvme0" 00:05:45.239 }, 00:05:45.239 "method": "bdev_nvme_attach_controller" 00:05:45.239 }, 00:05:45.239 { 00:05:45.239 "method": "bdev_wait_for_examine" 00:05:45.239 } 00:05:45.239 ] 00:05:45.239 } 00:05:45.239 ] 00:05:45.239 } 00:05:45.239 [2024-10-01 13:33:37.000016] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:45.239 [2024-10-01 13:33:37.000142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59930 ] 00:05:45.497 [2024-10-01 13:33:37.137837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.497 [2024-10-01 13:33:37.193217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.497 [2024-10-01 13:33:37.222743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.756  Copying: 1024/1024 [kB] (average 500 MBps) 00:05:45.756 00:05:45.756 13:33:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:45.756 00:05:45.756 real 0m16.095s 00:05:45.756 user 0m12.018s 00:05:45.756 sys 0m4.739s 00:05:45.756 13:33:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.756 13:33:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:45.756 ************************************ 00:05:45.756 END TEST spdk_dd_basic_rw 00:05:45.756 ************************************ 00:05:45.756 13:33:37 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:45.756 13:33:37 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.756 13:33:37 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.756 13:33:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:45.756 ************************************ 00:05:45.756 START TEST spdk_dd_posix 00:05:45.756 ************************************ 00:05:45.756 13:33:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:46.016 * Looking for test storage... 
00:05:46.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.016 --rc genhtml_branch_coverage=1 00:05:46.016 --rc genhtml_function_coverage=1 00:05:46.016 --rc genhtml_legend=1 00:05:46.016 --rc geninfo_all_blocks=1 00:05:46.016 --rc geninfo_unexecuted_blocks=1 00:05:46.016 00:05:46.016 ' 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.016 --rc genhtml_branch_coverage=1 00:05:46.016 --rc genhtml_function_coverage=1 00:05:46.016 --rc genhtml_legend=1 00:05:46.016 --rc geninfo_all_blocks=1 00:05:46.016 --rc geninfo_unexecuted_blocks=1 00:05:46.016 00:05:46.016 ' 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.016 --rc genhtml_branch_coverage=1 00:05:46.016 --rc genhtml_function_coverage=1 00:05:46.016 --rc genhtml_legend=1 00:05:46.016 --rc geninfo_all_blocks=1 00:05:46.016 --rc geninfo_unexecuted_blocks=1 00:05:46.016 00:05:46.016 ' 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.016 --rc genhtml_branch_coverage=1 00:05:46.016 --rc genhtml_function_coverage=1 00:05:46.016 --rc genhtml_legend=1 00:05:46.016 --rc geninfo_all_blocks=1 00:05:46.016 --rc geninfo_unexecuted_blocks=1 00:05:46.016 00:05:46.016 ' 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.016 13:33:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:05:46.017 * First test run, liburing in use 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:46.017 ************************************ 00:05:46.017 START TEST dd_flag_append 00:05:46.017 ************************************ 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=g9ppkk8arb8jckhp7fu30fdtu7ahxxqf 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=y1yuz95fhyc5j7n20liegmw5loo78epv 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s g9ppkk8arb8jckhp7fu30fdtu7ahxxqf 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s y1yuz95fhyc5j7n20liegmw5loo78epv 00:05:46.017 13:33:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:46.017 [2024-10-01 13:33:37.793871] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:46.017 [2024-10-01 13:33:37.793957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60002 ] 00:05:46.276 [2024-10-01 13:33:37.925317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.276 [2024-10-01 13:33:37.975509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.276 [2024-10-01 13:33:38.003673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.535  Copying: 32/32 [B] (average 31 kBps) 00:05:46.535 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ y1yuz95fhyc5j7n20liegmw5loo78epvg9ppkk8arb8jckhp7fu30fdtu7ahxxqf == \y\1\y\u\z\9\5\f\h\y\c\5\j\7\n\2\0\l\i\e\g\m\w\5\l\o\o\7\8\e\p\v\g\9\p\p\k\k\8\a\r\b\8\j\c\k\h\p\7\f\u\3\0\f\d\t\u\7\a\h\x\x\q\f ]] 00:05:46.535 00:05:46.535 real 0m0.434s 00:05:46.535 user 0m0.239s 00:05:46.535 sys 0m0.163s 00:05:46.535 ************************************ 00:05:46.535 END TEST dd_flag_append 00:05:46.535 ************************************ 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:46.535 ************************************ 00:05:46.535 START TEST dd_flag_directory 00:05:46.535 ************************************ 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:46.535 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:46.535 [2024-10-01 13:33:38.288298] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:46.535 [2024-10-01 13:33:38.288398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60036 ] 00:05:46.794 [2024-10-01 13:33:38.426282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.794 [2024-10-01 13:33:38.487907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.794 [2024-10-01 13:33:38.517122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.794 [2024-10-01 13:33:38.534328] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:46.794 [2024-10-01 13:33:38.534379] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:46.794 [2024-10-01 13:33:38.534408] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:46.794 [2024-10-01 13:33:38.593271] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.073 13:33:38 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:47.073 13:33:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:47.073 [2024-10-01 13:33:38.722967] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:47.073 [2024-10-01 13:33:38.723063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60040 ] 00:05:47.073 [2024-10-01 13:33:38.856142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.073 [2024-10-01 13:33:38.909112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.336 [2024-10-01 13:33:38.940336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.336 [2024-10-01 13:33:38.957188] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:47.336 [2024-10-01 13:33:38.957239] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:47.336 [2024-10-01 13:33:38.957268] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:47.336 [2024-10-01 13:33:39.015217] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.336 00:05:47.336 real 0m0.860s 00:05:47.336 user 0m0.454s 00:05:47.336 sys 0m0.198s 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:05:47.336 ************************************ 00:05:47.336 END TEST dd_flag_directory 00:05:47.336 ************************************ 00:05:47.336 13:33:39 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:47.336 ************************************ 00:05:47.336 START TEST dd_flag_nofollow 00:05:47.336 ************************************ 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:47.336 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:47.594 [2024-10-01 13:33:39.200625] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:47.595 [2024-10-01 13:33:39.201184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60063 ] 00:05:47.595 [2024-10-01 13:33:39.336966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.595 [2024-10-01 13:33:39.391438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.595 [2024-10-01 13:33:39.424830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.595 [2024-10-01 13:33:39.445306] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:47.595 [2024-10-01 13:33:39.445372] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:47.595 [2024-10-01 13:33:39.445400] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:47.854 [2024-10-01 13:33:39.513791] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.854 13:33:39 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:47.854 13:33:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:47.854 [2024-10-01 13:33:39.671038] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:47.854 [2024-10-01 13:33:39.671136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60078 ] 00:05:48.113 [2024-10-01 13:33:39.804064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.113 [2024-10-01 13:33:39.858827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.113 [2024-10-01 13:33:39.887350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.113 [2024-10-01 13:33:39.905274] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:48.113 [2024-10-01 13:33:39.905340] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:48.113 [2024-10-01 13:33:39.905371] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.113 [2024-10-01 13:33:39.965216] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:48.371 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:05:48.371 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.371 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:05:48.371 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:05:48.371 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:05:48.371 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.371 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:05:48.371 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:05:48.371 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:48.371 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:48.371 [2024-10-01 13:33:40.109365] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:48.371 [2024-10-01 13:33:40.109465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60080 ] 00:05:48.630 [2024-10-01 13:33:40.245084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.630 [2024-10-01 13:33:40.302292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.630 [2024-10-01 13:33:40.330133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.630  Copying: 512/512 [B] (average 500 kBps) 00:05:48.630 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 0mdovgeyv2bzz5v2svvmoud3hd2o5buhqgk65zrtqr6psakhgjstngqg8dnh5lotazqdelgm9btl8j60gftz3o99vrmtnb0z35n29871pygm2eked6oknf9waqg9ggyftb3vl222lu6xt4aogvyl7ej3czhuohl6y5wmkxzqz70bc5jkfxxwethbvkhh9nyxk3mfub9saidiyi5d3rjvtegu4kqjci4uixitu9fom8znbzihvhncj07ipjajmpkjyo2euz1jixjtikav9mldjbvd9849qd5rr0yl4kckmfph1fa6f9p9situwtyr4ot246mbd4j21a1mffkq5cpktjbv5bwybd3ho73ghy1bjk5mbei9mpsq5if8o87xvgdr7ii7e1myqee1owhpfbhvmsvz9yr6tvxykhkruceh2fxvcil5t22qb738u6hxema07kplp7iuaway9cgkkvnw38g1uofm4sd5olyl31y8q4pl36knnulpdo2ifdpftyb2 == \0\m\d\o\v\g\e\y\v\2\b\z\z\5\v\2\s\v\v\m\o\u\d\3\h\d\2\o\5\b\u\h\q\g\k\6\5\z\r\t\q\r\6\p\s\a\k\h\g\j\s\t\n\g\q\g\8\d\n\h\5\l\o\t\a\z\q\d\e\l\g\m\9\b\t\l\8\j\6\0\g\f\t\z\3\o\9\9\v\r\m\t\n\b\0\z\3\5\n\2\9\8\7\1\p\y\g\m\2\e\k\e\d\6\o\k\n\f\9\w\a\q\g\9\g\g\y\f\t\b\3\v\l\2\2\2\l\u\6\x\t\4\a\o\g\v\y\l\7\e\j\3\c\z\h\u\o\h\l\6\y\5\w\m\k\x\z\q\z\7\0\b\c\5\j\k\f\x\x\w\e\t\h\b\v\k\h\h\9\n\y\x\k\3\m\f\u\b\9\s\a\i\d\i\y\i\5\d\3\r\j\v\t\e\g\u\4\k\q\j\c\i\4\u\i\x\i\t\u\9\f\o\m\8\z\n\b\z\i\h\v\h\n\c\j\0\7\i\p\j\a\j\m\p\k\j\y\o\2\e\u\z\1\j\i\x\j\t\i\k\a\v\9\m\l\d\j\b\v\d\9\8\4\9\q\d\5\r\r\0\y\l\4\k\c\k\m\f\p\h\1\f\a\6\f\9\p\9\s\i\t\u\w\t\y\r\4\o\t\2\4\6\m\b\d\4\j\2\1\a\1\m\f\f\k\q\5\c\p\k\t\j\b\v\5\b\w\y\b\d\3\h\o\7\3\g\h\y\1\b\j\k\5\m\b\e\i\9\m\p\s\q\5\i\f\8\o\8\7\x\v\g\d\r\7\i\i\7\e\1\m\y\q\e\e\1\o\w\h\p\f\b\h\v\m\s\v\z\9\y\r\6\t\v\x\y\k\h\k\r\u\c\e\h\2\f\x\v\c\i\l\5\t\2\2\q\b\7\3\8\u\6\h\x\e\m\a\0\7\k\p\l\p\7\i\u\a\w\a\y\9\c\g\k\k\v\n\w\3\8\g\1\u\o\f\m\4\s\d\5\o\l\y\l\3\1\y\8\q\4\p\l\3\6\k\n\n\u\l\p\d\o\2\i\f\d\p\f\t\y\b\2 ]] 00:05:48.889 00:05:48.889 real 0m1.354s 00:05:48.889 user 0m0.724s 00:05:48.889 sys 0m0.390s 00:05:48.889 ************************************ 00:05:48.889 END TEST dd_flag_nofollow 00:05:48.889 ************************************ 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:48.889 ************************************ 00:05:48.889 START TEST dd_flag_noatime 00:05:48.889 ************************************ 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:05:48.889 13:33:40 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1727789620 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1727789620 00:05:48.889 13:33:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:05:49.823 13:33:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:49.823 [2024-10-01 13:33:41.621829] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:49.823 [2024-10-01 13:33:41.621932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60128 ] 00:05:50.082 [2024-10-01 13:33:41.762276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.082 [2024-10-01 13:33:41.833283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.082 [2024-10-01 13:33:41.866402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.341  Copying: 512/512 [B] (average 500 kBps) 00:05:50.341 00:05:50.341 13:33:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:50.341 13:33:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1727789620 )) 00:05:50.341 13:33:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:50.341 13:33:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1727789620 )) 00:05:50.341 13:33:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:50.341 [2024-10-01 13:33:42.101631] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:50.341 [2024-10-01 13:33:42.101758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60136 ] 00:05:50.599 [2024-10-01 13:33:42.235255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.599 [2024-10-01 13:33:42.289220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.599 [2024-10-01 13:33:42.320475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.859  Copying: 512/512 [B] (average 500 kBps) 00:05:50.859 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1727789622 )) 00:05:50.859 00:05:50.859 real 0m1.947s 00:05:50.859 user 0m0.506s 00:05:50.859 sys 0m0.389s 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.859 ************************************ 00:05:50.859 END TEST dd_flag_noatime 00:05:50.859 ************************************ 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:50.859 ************************************ 00:05:50.859 START TEST dd_flags_misc 00:05:50.859 ************************************ 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:50.859 13:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:50.859 [2024-10-01 13:33:42.596409] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:50.859 [2024-10-01 13:33:42.596508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60170 ] 00:05:51.118 [2024-10-01 13:33:42.720639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.118 [2024-10-01 13:33:42.772528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.118 [2024-10-01 13:33:42.799614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.118  Copying: 512/512 [B] (average 500 kBps) 00:05:51.118 00:05:51.119 13:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rdtbugjyrxdsfrf240cjlqnc2j9lr3iqsk0ahft50u7g7kwwj24xmgs19e7qh6cddkeh691jzh72wtzp3zri1ismq1y27ghilfe1vx5vxv2dnln9fkosa9vq2b9e56kjixehoki8nmwo6tcadupx2aobrakigsh8pycqit1oobjqeymu3o4qsxqdujw6vihy25fhx087mt1cpu65fbl4cej3mwpxz0c500hrk7ai0hc58dwuhekx2uvnvu39w1rrvfpue273csx5gy2jsrx6ledvuky4hct20nvawema713bodg96newjklx3gjjxtlov71g11ksf0hntltnn20mhrbqc4dsg4388jx5vfpsfhzsdjrr0iua1zv713j6b44ucjpa7routdyi5pz7k8neu0xss4vfj6ltotuot5ik0kqjqb5bwvnzp96y0h6jqabv6gs1j3wmsk3vmla98xzkweawb8fct4vbjoojm4fyfs8fhl1ylnyz2qgfksfad8mo == \r\d\t\b\u\g\j\y\r\x\d\s\f\r\f\2\4\0\c\j\l\q\n\c\2\j\9\l\r\3\i\q\s\k\0\a\h\f\t\5\0\u\7\g\7\k\w\w\j\2\4\x\m\g\s\1\9\e\7\q\h\6\c\d\d\k\e\h\6\9\1\j\z\h\7\2\w\t\z\p\3\z\r\i\1\i\s\m\q\1\y\2\7\g\h\i\l\f\e\1\v\x\5\v\x\v\2\d\n\l\n\9\f\k\o\s\a\9\v\q\2\b\9\e\5\6\k\j\i\x\e\h\o\k\i\8\n\m\w\o\6\t\c\a\d\u\p\x\2\a\o\b\r\a\k\i\g\s\h\8\p\y\c\q\i\t\1\o\o\b\j\q\e\y\m\u\3\o\4\q\s\x\q\d\u\j\w\6\v\i\h\y\2\5\f\h\x\0\8\7\m\t\1\c\p\u\6\5\f\b\l\4\c\e\j\3\m\w\p\x\z\0\c\5\0\0\h\r\k\7\a\i\0\h\c\5\8\d\w\u\h\e\k\x\2\u\v\n\v\u\3\9\w\1\r\r\v\f\p\u\e\2\7\3\c\s\x\5\g\y\2\j\s\r\x\6\l\e\d\v\u\k\y\4\h\c\t\2\0\n\v\a\w\e\m\a\7\1\3\b\o\d\g\9\6\n\e\w\j\k\l\x\3\g\j\j\x\t\l\o\v\7\1\g\1\1\k\s\f\0\h\n\t\l\t\n\n\2\0\m\h\r\b\q\c\4\d\s\g\4\3\8\8\j\x\5\v\f\p\s\f\h\z\s\d\j\r\r\0\i\u\a\1\z\v\7\1\3\j\6\b\4\4\u\c\j\p\a\7\r\o\u\t\d\y\i\5\p\z\7\k\8\n\e\u\0\x\s\s\4\v\f\j\6\l\t\o\t\u\o\t\5\i\k\0\k\q\j\q\b\5\b\w\v\n\z\p\9\6\y\0\h\6\j\q\a\b\v\6\g\s\1\j\3\w\m\s\k\3\v\m\l\a\9\8\x\z\k\w\e\a\w\b\8\f\c\t\4\v\b\j\o\o\j\m\4\f\y\f\s\8\f\h\l\1\y\l\n\y\z\2\q\g\f\k\s\f\a\d\8\m\o ]] 00:05:51.119 13:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:51.119 13:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:51.378 [2024-10-01 13:33:43.016382] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:51.378 [2024-10-01 13:33:43.016503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60174 ] 00:05:51.378 [2024-10-01 13:33:43.148502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.378 [2024-10-01 13:33:43.197334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.378 [2024-10-01 13:33:43.224401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.637  Copying: 512/512 [B] (average 500 kBps) 00:05:51.637 00:05:51.637 13:33:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rdtbugjyrxdsfrf240cjlqnc2j9lr3iqsk0ahft50u7g7kwwj24xmgs19e7qh6cddkeh691jzh72wtzp3zri1ismq1y27ghilfe1vx5vxv2dnln9fkosa9vq2b9e56kjixehoki8nmwo6tcadupx2aobrakigsh8pycqit1oobjqeymu3o4qsxqdujw6vihy25fhx087mt1cpu65fbl4cej3mwpxz0c500hrk7ai0hc58dwuhekx2uvnvu39w1rrvfpue273csx5gy2jsrx6ledvuky4hct20nvawema713bodg96newjklx3gjjxtlov71g11ksf0hntltnn20mhrbqc4dsg4388jx5vfpsfhzsdjrr0iua1zv713j6b44ucjpa7routdyi5pz7k8neu0xss4vfj6ltotuot5ik0kqjqb5bwvnzp96y0h6jqabv6gs1j3wmsk3vmla98xzkweawb8fct4vbjoojm4fyfs8fhl1ylnyz2qgfksfad8mo == \r\d\t\b\u\g\j\y\r\x\d\s\f\r\f\2\4\0\c\j\l\q\n\c\2\j\9\l\r\3\i\q\s\k\0\a\h\f\t\5\0\u\7\g\7\k\w\w\j\2\4\x\m\g\s\1\9\e\7\q\h\6\c\d\d\k\e\h\6\9\1\j\z\h\7\2\w\t\z\p\3\z\r\i\1\i\s\m\q\1\y\2\7\g\h\i\l\f\e\1\v\x\5\v\x\v\2\d\n\l\n\9\f\k\o\s\a\9\v\q\2\b\9\e\5\6\k\j\i\x\e\h\o\k\i\8\n\m\w\o\6\t\c\a\d\u\p\x\2\a\o\b\r\a\k\i\g\s\h\8\p\y\c\q\i\t\1\o\o\b\j\q\e\y\m\u\3\o\4\q\s\x\q\d\u\j\w\6\v\i\h\y\2\5\f\h\x\0\8\7\m\t\1\c\p\u\6\5\f\b\l\4\c\e\j\3\m\w\p\x\z\0\c\5\0\0\h\r\k\7\a\i\0\h\c\5\8\d\w\u\h\e\k\x\2\u\v\n\v\u\3\9\w\1\r\r\v\f\p\u\e\2\7\3\c\s\x\5\g\y\2\j\s\r\x\6\l\e\d\v\u\k\y\4\h\c\t\2\0\n\v\a\w\e\m\a\7\1\3\b\o\d\g\9\6\n\e\w\j\k\l\x\3\g\j\j\x\t\l\o\v\7\1\g\1\1\k\s\f\0\h\n\t\l\t\n\n\2\0\m\h\r\b\q\c\4\d\s\g\4\3\8\8\j\x\5\v\f\p\s\f\h\z\s\d\j\r\r\0\i\u\a\1\z\v\7\1\3\j\6\b\4\4\u\c\j\p\a\7\r\o\u\t\d\y\i\5\p\z\7\k\8\n\e\u\0\x\s\s\4\v\f\j\6\l\t\o\t\u\o\t\5\i\k\0\k\q\j\q\b\5\b\w\v\n\z\p\9\6\y\0\h\6\j\q\a\b\v\6\g\s\1\j\3\w\m\s\k\3\v\m\l\a\9\8\x\z\k\w\e\a\w\b\8\f\c\t\4\v\b\j\o\o\j\m\4\f\y\f\s\8\f\h\l\1\y\l\n\y\z\2\q\g\f\k\s\f\a\d\8\m\o ]] 00:05:51.637 13:33:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:51.637 13:33:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:51.637 [2024-10-01 13:33:43.446633] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:51.637 [2024-10-01 13:33:43.446724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60188 ] 00:05:51.897 [2024-10-01 13:33:43.585799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.897 [2024-10-01 13:33:43.638661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.897 [2024-10-01 13:33:43.665970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.157  Copying: 512/512 [B] (average 83 kBps) 00:05:52.157 00:05:52.157 13:33:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rdtbugjyrxdsfrf240cjlqnc2j9lr3iqsk0ahft50u7g7kwwj24xmgs19e7qh6cddkeh691jzh72wtzp3zri1ismq1y27ghilfe1vx5vxv2dnln9fkosa9vq2b9e56kjixehoki8nmwo6tcadupx2aobrakigsh8pycqit1oobjqeymu3o4qsxqdujw6vihy25fhx087mt1cpu65fbl4cej3mwpxz0c500hrk7ai0hc58dwuhekx2uvnvu39w1rrvfpue273csx5gy2jsrx6ledvuky4hct20nvawema713bodg96newjklx3gjjxtlov71g11ksf0hntltnn20mhrbqc4dsg4388jx5vfpsfhzsdjrr0iua1zv713j6b44ucjpa7routdyi5pz7k8neu0xss4vfj6ltotuot5ik0kqjqb5bwvnzp96y0h6jqabv6gs1j3wmsk3vmla98xzkweawb8fct4vbjoojm4fyfs8fhl1ylnyz2qgfksfad8mo == \r\d\t\b\u\g\j\y\r\x\d\s\f\r\f\2\4\0\c\j\l\q\n\c\2\j\9\l\r\3\i\q\s\k\0\a\h\f\t\5\0\u\7\g\7\k\w\w\j\2\4\x\m\g\s\1\9\e\7\q\h\6\c\d\d\k\e\h\6\9\1\j\z\h\7\2\w\t\z\p\3\z\r\i\1\i\s\m\q\1\y\2\7\g\h\i\l\f\e\1\v\x\5\v\x\v\2\d\n\l\n\9\f\k\o\s\a\9\v\q\2\b\9\e\5\6\k\j\i\x\e\h\o\k\i\8\n\m\w\o\6\t\c\a\d\u\p\x\2\a\o\b\r\a\k\i\g\s\h\8\p\y\c\q\i\t\1\o\o\b\j\q\e\y\m\u\3\o\4\q\s\x\q\d\u\j\w\6\v\i\h\y\2\5\f\h\x\0\8\7\m\t\1\c\p\u\6\5\f\b\l\4\c\e\j\3\m\w\p\x\z\0\c\5\0\0\h\r\k\7\a\i\0\h\c\5\8\d\w\u\h\e\k\x\2\u\v\n\v\u\3\9\w\1\r\r\v\f\p\u\e\2\7\3\c\s\x\5\g\y\2\j\s\r\x\6\l\e\d\v\u\k\y\4\h\c\t\2\0\n\v\a\w\e\m\a\7\1\3\b\o\d\g\9\6\n\e\w\j\k\l\x\3\g\j\j\x\t\l\o\v\7\1\g\1\1\k\s\f\0\h\n\t\l\t\n\n\2\0\m\h\r\b\q\c\4\d\s\g\4\3\8\8\j\x\5\v\f\p\s\f\h\z\s\d\j\r\r\0\i\u\a\1\z\v\7\1\3\j\6\b\4\4\u\c\j\p\a\7\r\o\u\t\d\y\i\5\p\z\7\k\8\n\e\u\0\x\s\s\4\v\f\j\6\l\t\o\t\u\o\t\5\i\k\0\k\q\j\q\b\5\b\w\v\n\z\p\9\6\y\0\h\6\j\q\a\b\v\6\g\s\1\j\3\w\m\s\k\3\v\m\l\a\9\8\x\z\k\w\e\a\w\b\8\f\c\t\4\v\b\j\o\o\j\m\4\f\y\f\s\8\f\h\l\1\y\l\n\y\z\2\q\g\f\k\s\f\a\d\8\m\o ]] 00:05:52.157 13:33:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:52.157 13:33:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:52.157 [2024-10-01 13:33:43.880167] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:52.157 [2024-10-01 13:33:43.880266] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60193 ] 00:05:52.157 [2024-10-01 13:33:44.015138] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.416 [2024-10-01 13:33:44.064560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.416 [2024-10-01 13:33:44.091729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.416  Copying: 512/512 [B] (average 125 kBps) 00:05:52.416 00:05:52.416 13:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rdtbugjyrxdsfrf240cjlqnc2j9lr3iqsk0ahft50u7g7kwwj24xmgs19e7qh6cddkeh691jzh72wtzp3zri1ismq1y27ghilfe1vx5vxv2dnln9fkosa9vq2b9e56kjixehoki8nmwo6tcadupx2aobrakigsh8pycqit1oobjqeymu3o4qsxqdujw6vihy25fhx087mt1cpu65fbl4cej3mwpxz0c500hrk7ai0hc58dwuhekx2uvnvu39w1rrvfpue273csx5gy2jsrx6ledvuky4hct20nvawema713bodg96newjklx3gjjxtlov71g11ksf0hntltnn20mhrbqc4dsg4388jx5vfpsfhzsdjrr0iua1zv713j6b44ucjpa7routdyi5pz7k8neu0xss4vfj6ltotuot5ik0kqjqb5bwvnzp96y0h6jqabv6gs1j3wmsk3vmla98xzkweawb8fct4vbjoojm4fyfs8fhl1ylnyz2qgfksfad8mo == \r\d\t\b\u\g\j\y\r\x\d\s\f\r\f\2\4\0\c\j\l\q\n\c\2\j\9\l\r\3\i\q\s\k\0\a\h\f\t\5\0\u\7\g\7\k\w\w\j\2\4\x\m\g\s\1\9\e\7\q\h\6\c\d\d\k\e\h\6\9\1\j\z\h\7\2\w\t\z\p\3\z\r\i\1\i\s\m\q\1\y\2\7\g\h\i\l\f\e\1\v\x\5\v\x\v\2\d\n\l\n\9\f\k\o\s\a\9\v\q\2\b\9\e\5\6\k\j\i\x\e\h\o\k\i\8\n\m\w\o\6\t\c\a\d\u\p\x\2\a\o\b\r\a\k\i\g\s\h\8\p\y\c\q\i\t\1\o\o\b\j\q\e\y\m\u\3\o\4\q\s\x\q\d\u\j\w\6\v\i\h\y\2\5\f\h\x\0\8\7\m\t\1\c\p\u\6\5\f\b\l\4\c\e\j\3\m\w\p\x\z\0\c\5\0\0\h\r\k\7\a\i\0\h\c\5\8\d\w\u\h\e\k\x\2\u\v\n\v\u\3\9\w\1\r\r\v\f\p\u\e\2\7\3\c\s\x\5\g\y\2\j\s\r\x\6\l\e\d\v\u\k\y\4\h\c\t\2\0\n\v\a\w\e\m\a\7\1\3\b\o\d\g\9\6\n\e\w\j\k\l\x\3\g\j\j\x\t\l\o\v\7\1\g\1\1\k\s\f\0\h\n\t\l\t\n\n\2\0\m\h\r\b\q\c\4\d\s\g\4\3\8\8\j\x\5\v\f\p\s\f\h\z\s\d\j\r\r\0\i\u\a\1\z\v\7\1\3\j\6\b\4\4\u\c\j\p\a\7\r\o\u\t\d\y\i\5\p\z\7\k\8\n\e\u\0\x\s\s\4\v\f\j\6\l\t\o\t\u\o\t\5\i\k\0\k\q\j\q\b\5\b\w\v\n\z\p\9\6\y\0\h\6\j\q\a\b\v\6\g\s\1\j\3\w\m\s\k\3\v\m\l\a\9\8\x\z\k\w\e\a\w\b\8\f\c\t\4\v\b\j\o\o\j\m\4\f\y\f\s\8\f\h\l\1\y\l\n\y\z\2\q\g\f\k\s\f\a\d\8\m\o ]] 00:05:52.416 13:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:52.416 13:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:52.416 13:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:52.416 13:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:52.416 13:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:52.416 13:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:52.675 [2024-10-01 13:33:44.315515] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:52.675 [2024-10-01 13:33:44.315663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60197 ] 00:05:52.675 [2024-10-01 13:33:44.452202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.675 [2024-10-01 13:33:44.504501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.675 [2024-10-01 13:33:44.532002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.934  Copying: 512/512 [B] (average 500 kBps) 00:05:52.934 00:05:52.934 13:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ opexhgcck88rmpfnjy5aloqqlaukc8bpqx58ses3wwo7f3dh6birh31byzj6ptfkphf04mon11qbncrbj7rzzepsowkhmhdgnadehggaodhx3crtp9czp4ch6ym774d929t8tn9jl6413m2u0wxjc4m046hmhj21o9j55z000aiolgqz2sdlg9yp29yosj8hr0h21axy1pirlgunl5pmzbj8ofb6x2jnjb3tth6foyb9551nuscbpoycr2h7wsc66m0tir5carwrdlt8mi7ztlfh1u0p54foqvxsey7tizknm82nr4m8d41meg1hr19xxbbeaq4a8wocl7o66u84pyaqivhr9g48z17jr6m7u9c4tftm6ku9hzjaw83n6wc6j98rogk7r1s2fcmvdnsfzvvnfmk2ueqrccdsr2vabvnluot2had0rgntl9q1s7rzj4awex7cjj17atjlu9ebnobeqbbhyh73avr7q909xu8rua6ykqxcngbep91y7r1k == \o\p\e\x\h\g\c\c\k\8\8\r\m\p\f\n\j\y\5\a\l\o\q\q\l\a\u\k\c\8\b\p\q\x\5\8\s\e\s\3\w\w\o\7\f\3\d\h\6\b\i\r\h\3\1\b\y\z\j\6\p\t\f\k\p\h\f\0\4\m\o\n\1\1\q\b\n\c\r\b\j\7\r\z\z\e\p\s\o\w\k\h\m\h\d\g\n\a\d\e\h\g\g\a\o\d\h\x\3\c\r\t\p\9\c\z\p\4\c\h\6\y\m\7\7\4\d\9\2\9\t\8\t\n\9\j\l\6\4\1\3\m\2\u\0\w\x\j\c\4\m\0\4\6\h\m\h\j\2\1\o\9\j\5\5\z\0\0\0\a\i\o\l\g\q\z\2\s\d\l\g\9\y\p\2\9\y\o\s\j\8\h\r\0\h\2\1\a\x\y\1\p\i\r\l\g\u\n\l\5\p\m\z\b\j\8\o\f\b\6\x\2\j\n\j\b\3\t\t\h\6\f\o\y\b\9\5\5\1\n\u\s\c\b\p\o\y\c\r\2\h\7\w\s\c\6\6\m\0\t\i\r\5\c\a\r\w\r\d\l\t\8\m\i\7\z\t\l\f\h\1\u\0\p\5\4\f\o\q\v\x\s\e\y\7\t\i\z\k\n\m\8\2\n\r\4\m\8\d\4\1\m\e\g\1\h\r\1\9\x\x\b\b\e\a\q\4\a\8\w\o\c\l\7\o\6\6\u\8\4\p\y\a\q\i\v\h\r\9\g\4\8\z\1\7\j\r\6\m\7\u\9\c\4\t\f\t\m\6\k\u\9\h\z\j\a\w\8\3\n\6\w\c\6\j\9\8\r\o\g\k\7\r\1\s\2\f\c\m\v\d\n\s\f\z\v\v\n\f\m\k\2\u\e\q\r\c\c\d\s\r\2\v\a\b\v\n\l\u\o\t\2\h\a\d\0\r\g\n\t\l\9\q\1\s\7\r\z\j\4\a\w\e\x\7\c\j\j\1\7\a\t\j\l\u\9\e\b\n\o\b\e\q\b\b\h\y\h\7\3\a\v\r\7\q\9\0\9\x\u\8\r\u\a\6\y\k\q\x\c\n\g\b\e\p\9\1\y\7\r\1\k ]] 00:05:52.934 13:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:52.934 13:33:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:52.934 [2024-10-01 13:33:44.744690] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:52.934 [2024-10-01 13:33:44.744794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60212 ] 00:05:53.193 [2024-10-01 13:33:44.872325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.193 [2024-10-01 13:33:44.920987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.193 [2024-10-01 13:33:44.948018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.451  Copying: 512/512 [B] (average 500 kBps) 00:05:53.451 00:05:53.451 13:33:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ opexhgcck88rmpfnjy5aloqqlaukc8bpqx58ses3wwo7f3dh6birh31byzj6ptfkphf04mon11qbncrbj7rzzepsowkhmhdgnadehggaodhx3crtp9czp4ch6ym774d929t8tn9jl6413m2u0wxjc4m046hmhj21o9j55z000aiolgqz2sdlg9yp29yosj8hr0h21axy1pirlgunl5pmzbj8ofb6x2jnjb3tth6foyb9551nuscbpoycr2h7wsc66m0tir5carwrdlt8mi7ztlfh1u0p54foqvxsey7tizknm82nr4m8d41meg1hr19xxbbeaq4a8wocl7o66u84pyaqivhr9g48z17jr6m7u9c4tftm6ku9hzjaw83n6wc6j98rogk7r1s2fcmvdnsfzvvnfmk2ueqrccdsr2vabvnluot2had0rgntl9q1s7rzj4awex7cjj17atjlu9ebnobeqbbhyh73avr7q909xu8rua6ykqxcngbep91y7r1k == \o\p\e\x\h\g\c\c\k\8\8\r\m\p\f\n\j\y\5\a\l\o\q\q\l\a\u\k\c\8\b\p\q\x\5\8\s\e\s\3\w\w\o\7\f\3\d\h\6\b\i\r\h\3\1\b\y\z\j\6\p\t\f\k\p\h\f\0\4\m\o\n\1\1\q\b\n\c\r\b\j\7\r\z\z\e\p\s\o\w\k\h\m\h\d\g\n\a\d\e\h\g\g\a\o\d\h\x\3\c\r\t\p\9\c\z\p\4\c\h\6\y\m\7\7\4\d\9\2\9\t\8\t\n\9\j\l\6\4\1\3\m\2\u\0\w\x\j\c\4\m\0\4\6\h\m\h\j\2\1\o\9\j\5\5\z\0\0\0\a\i\o\l\g\q\z\2\s\d\l\g\9\y\p\2\9\y\o\s\j\8\h\r\0\h\2\1\a\x\y\1\p\i\r\l\g\u\n\l\5\p\m\z\b\j\8\o\f\b\6\x\2\j\n\j\b\3\t\t\h\6\f\o\y\b\9\5\5\1\n\u\s\c\b\p\o\y\c\r\2\h\7\w\s\c\6\6\m\0\t\i\r\5\c\a\r\w\r\d\l\t\8\m\i\7\z\t\l\f\h\1\u\0\p\5\4\f\o\q\v\x\s\e\y\7\t\i\z\k\n\m\8\2\n\r\4\m\8\d\4\1\m\e\g\1\h\r\1\9\x\x\b\b\e\a\q\4\a\8\w\o\c\l\7\o\6\6\u\8\4\p\y\a\q\i\v\h\r\9\g\4\8\z\1\7\j\r\6\m\7\u\9\c\4\t\f\t\m\6\k\u\9\h\z\j\a\w\8\3\n\6\w\c\6\j\9\8\r\o\g\k\7\r\1\s\2\f\c\m\v\d\n\s\f\z\v\v\n\f\m\k\2\u\e\q\r\c\c\d\s\r\2\v\a\b\v\n\l\u\o\t\2\h\a\d\0\r\g\n\t\l\9\q\1\s\7\r\z\j\4\a\w\e\x\7\c\j\j\1\7\a\t\j\l\u\9\e\b\n\o\b\e\q\b\b\h\y\h\7\3\a\v\r\7\q\9\0\9\x\u\8\r\u\a\6\y\k\q\x\c\n\g\b\e\p\9\1\y\7\r\1\k ]] 00:05:53.451 13:33:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:53.451 13:33:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:53.451 [2024-10-01 13:33:45.156096] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:53.451 [2024-10-01 13:33:45.156202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60216 ] 00:05:53.451 [2024-10-01 13:33:45.290137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.710 [2024-10-01 13:33:45.342902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.710 [2024-10-01 13:33:45.373250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.710  Copying: 512/512 [B] (average 166 kBps) 00:05:53.710 00:05:53.711 13:33:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ opexhgcck88rmpfnjy5aloqqlaukc8bpqx58ses3wwo7f3dh6birh31byzj6ptfkphf04mon11qbncrbj7rzzepsowkhmhdgnadehggaodhx3crtp9czp4ch6ym774d929t8tn9jl6413m2u0wxjc4m046hmhj21o9j55z000aiolgqz2sdlg9yp29yosj8hr0h21axy1pirlgunl5pmzbj8ofb6x2jnjb3tth6foyb9551nuscbpoycr2h7wsc66m0tir5carwrdlt8mi7ztlfh1u0p54foqvxsey7tizknm82nr4m8d41meg1hr19xxbbeaq4a8wocl7o66u84pyaqivhr9g48z17jr6m7u9c4tftm6ku9hzjaw83n6wc6j98rogk7r1s2fcmvdnsfzvvnfmk2ueqrccdsr2vabvnluot2had0rgntl9q1s7rzj4awex7cjj17atjlu9ebnobeqbbhyh73avr7q909xu8rua6ykqxcngbep91y7r1k == \o\p\e\x\h\g\c\c\k\8\8\r\m\p\f\n\j\y\5\a\l\o\q\q\l\a\u\k\c\8\b\p\q\x\5\8\s\e\s\3\w\w\o\7\f\3\d\h\6\b\i\r\h\3\1\b\y\z\j\6\p\t\f\k\p\h\f\0\4\m\o\n\1\1\q\b\n\c\r\b\j\7\r\z\z\e\p\s\o\w\k\h\m\h\d\g\n\a\d\e\h\g\g\a\o\d\h\x\3\c\r\t\p\9\c\z\p\4\c\h\6\y\m\7\7\4\d\9\2\9\t\8\t\n\9\j\l\6\4\1\3\m\2\u\0\w\x\j\c\4\m\0\4\6\h\m\h\j\2\1\o\9\j\5\5\z\0\0\0\a\i\o\l\g\q\z\2\s\d\l\g\9\y\p\2\9\y\o\s\j\8\h\r\0\h\2\1\a\x\y\1\p\i\r\l\g\u\n\l\5\p\m\z\b\j\8\o\f\b\6\x\2\j\n\j\b\3\t\t\h\6\f\o\y\b\9\5\5\1\n\u\s\c\b\p\o\y\c\r\2\h\7\w\s\c\6\6\m\0\t\i\r\5\c\a\r\w\r\d\l\t\8\m\i\7\z\t\l\f\h\1\u\0\p\5\4\f\o\q\v\x\s\e\y\7\t\i\z\k\n\m\8\2\n\r\4\m\8\d\4\1\m\e\g\1\h\r\1\9\x\x\b\b\e\a\q\4\a\8\w\o\c\l\7\o\6\6\u\8\4\p\y\a\q\i\v\h\r\9\g\4\8\z\1\7\j\r\6\m\7\u\9\c\4\t\f\t\m\6\k\u\9\h\z\j\a\w\8\3\n\6\w\c\6\j\9\8\r\o\g\k\7\r\1\s\2\f\c\m\v\d\n\s\f\z\v\v\n\f\m\k\2\u\e\q\r\c\c\d\s\r\2\v\a\b\v\n\l\u\o\t\2\h\a\d\0\r\g\n\t\l\9\q\1\s\7\r\z\j\4\a\w\e\x\7\c\j\j\1\7\a\t\j\l\u\9\e\b\n\o\b\e\q\b\b\h\y\h\7\3\a\v\r\7\q\9\0\9\x\u\8\r\u\a\6\y\k\q\x\c\n\g\b\e\p\9\1\y\7\r\1\k ]] 00:05:53.711 13:33:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:53.711 13:33:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:53.969 [2024-10-01 13:33:45.594720] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:53.969 [2024-10-01 13:33:45.594812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60231 ] 00:05:53.969 [2024-10-01 13:33:45.731144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.969 [2024-10-01 13:33:45.780695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.969 [2024-10-01 13:33:45.807586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.227  Copying: 512/512 [B] (average 250 kBps) 00:05:54.227 00:05:54.227 13:33:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ opexhgcck88rmpfnjy5aloqqlaukc8bpqx58ses3wwo7f3dh6birh31byzj6ptfkphf04mon11qbncrbj7rzzepsowkhmhdgnadehggaodhx3crtp9czp4ch6ym774d929t8tn9jl6413m2u0wxjc4m046hmhj21o9j55z000aiolgqz2sdlg9yp29yosj8hr0h21axy1pirlgunl5pmzbj8ofb6x2jnjb3tth6foyb9551nuscbpoycr2h7wsc66m0tir5carwrdlt8mi7ztlfh1u0p54foqvxsey7tizknm82nr4m8d41meg1hr19xxbbeaq4a8wocl7o66u84pyaqivhr9g48z17jr6m7u9c4tftm6ku9hzjaw83n6wc6j98rogk7r1s2fcmvdnsfzvvnfmk2ueqrccdsr2vabvnluot2had0rgntl9q1s7rzj4awex7cjj17atjlu9ebnobeqbbhyh73avr7q909xu8rua6ykqxcngbep91y7r1k == \o\p\e\x\h\g\c\c\k\8\8\r\m\p\f\n\j\y\5\a\l\o\q\q\l\a\u\k\c\8\b\p\q\x\5\8\s\e\s\3\w\w\o\7\f\3\d\h\6\b\i\r\h\3\1\b\y\z\j\6\p\t\f\k\p\h\f\0\4\m\o\n\1\1\q\b\n\c\r\b\j\7\r\z\z\e\p\s\o\w\k\h\m\h\d\g\n\a\d\e\h\g\g\a\o\d\h\x\3\c\r\t\p\9\c\z\p\4\c\h\6\y\m\7\7\4\d\9\2\9\t\8\t\n\9\j\l\6\4\1\3\m\2\u\0\w\x\j\c\4\m\0\4\6\h\m\h\j\2\1\o\9\j\5\5\z\0\0\0\a\i\o\l\g\q\z\2\s\d\l\g\9\y\p\2\9\y\o\s\j\8\h\r\0\h\2\1\a\x\y\1\p\i\r\l\g\u\n\l\5\p\m\z\b\j\8\o\f\b\6\x\2\j\n\j\b\3\t\t\h\6\f\o\y\b\9\5\5\1\n\u\s\c\b\p\o\y\c\r\2\h\7\w\s\c\6\6\m\0\t\i\r\5\c\a\r\w\r\d\l\t\8\m\i\7\z\t\l\f\h\1\u\0\p\5\4\f\o\q\v\x\s\e\y\7\t\i\z\k\n\m\8\2\n\r\4\m\8\d\4\1\m\e\g\1\h\r\1\9\x\x\b\b\e\a\q\4\a\8\w\o\c\l\7\o\6\6\u\8\4\p\y\a\q\i\v\h\r\9\g\4\8\z\1\7\j\r\6\m\7\u\9\c\4\t\f\t\m\6\k\u\9\h\z\j\a\w\8\3\n\6\w\c\6\j\9\8\r\o\g\k\7\r\1\s\2\f\c\m\v\d\n\s\f\z\v\v\n\f\m\k\2\u\e\q\r\c\c\d\s\r\2\v\a\b\v\n\l\u\o\t\2\h\a\d\0\r\g\n\t\l\9\q\1\s\7\r\z\j\4\a\w\e\x\7\c\j\j\1\7\a\t\j\l\u\9\e\b\n\o\b\e\q\b\b\h\y\h\7\3\a\v\r\7\q\9\0\9\x\u\8\r\u\a\6\y\k\q\x\c\n\g\b\e\p\9\1\y\7\r\1\k ]] 00:05:54.227 00:05:54.227 real 0m3.429s 00:05:54.227 user 0m1.807s 00:05:54.227 sys 0m1.415s 00:05:54.227 13:33:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.227 ************************************ 00:05:54.227 END TEST dd_flags_misc 00:05:54.227 13:33:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:54.227 ************************************ 00:05:54.227 13:33:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:05:54.227 13:33:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:05:54.227 * Second test run, disabling liburing, forcing AIO 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:05:54.228 ************************************ 00:05:54.228 START TEST dd_flag_append_forced_aio 00:05:54.228 ************************************ 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=f3a10hh6lui4kod34j7tq278dxvmqi49 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=grt7hieqx6uypu1jjc0gzm38mm1hxcy7 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s f3a10hh6lui4kod34j7tq278dxvmqi49 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s grt7hieqx6uypu1jjc0gzm38mm1hxcy7 00:05:54.228 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:54.487 [2024-10-01 13:33:46.090820] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:54.487 [2024-10-01 13:33:46.090935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60254 ] 00:05:54.487 [2024-10-01 13:33:46.228381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.487 [2024-10-01 13:33:46.278944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.487 [2024-10-01 13:33:46.307019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.746  Copying: 32/32 [B] (average 31 kBps) 00:05:54.746 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ grt7hieqx6uypu1jjc0gzm38mm1hxcy7f3a10hh6lui4kod34j7tq278dxvmqi49 == \g\r\t\7\h\i\e\q\x\6\u\y\p\u\1\j\j\c\0\g\z\m\3\8\m\m\1\h\x\c\y\7\f\3\a\1\0\h\h\6\l\u\i\4\k\o\d\3\4\j\7\t\q\2\7\8\d\x\v\m\q\i\4\9 ]] 00:05:54.746 00:05:54.746 real 0m0.461s 00:05:54.746 user 0m0.235s 00:05:54.746 sys 0m0.100s 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.746 ************************************ 00:05:54.746 END TEST dd_flag_append_forced_aio 00:05:54.746 ************************************ 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:54.746 ************************************ 00:05:54.746 START TEST dd_flag_directory_forced_aio 00:05:54.746 ************************************ 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:54.746 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:54.746 [2024-10-01 13:33:46.599957] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:54.746 [2024-10-01 13:33:46.600066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60285 ] 00:05:55.005 [2024-10-01 13:33:46.738759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.005 [2024-10-01 13:33:46.791330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.005 [2024-10-01 13:33:46.821054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.006 [2024-10-01 13:33:46.839635] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:55.006 [2024-10-01 13:33:46.839718] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:55.006 [2024-10-01 13:33:46.839733] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.265 [2024-10-01 13:33:46.903578] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:55.265 13:33:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:55.265 [2024-10-01 13:33:47.044607] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:55.265 [2024-10-01 13:33:47.044703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60290 ] 00:05:55.524 [2024-10-01 13:33:47.179123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.524 [2024-10-01 13:33:47.232117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.524 [2024-10-01 13:33:47.261191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.524 [2024-10-01 13:33:47.279662] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:55.524 [2024-10-01 13:33:47.279746] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:55.524 [2024-10-01 13:33:47.279759] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.524 [2024-10-01 13:33:47.343021] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:55.781 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:05:55.781 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.781 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:05:55.781 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:05:55.781 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:05:55.781 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:05:55.781 00:05:55.781 real 0m0.893s 00:05:55.781 user 0m0.475s 00:05:55.781 sys 0m0.209s 00:05:55.781 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.781 ************************************ 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:55.782 END TEST dd_flag_directory_forced_aio 00:05:55.782 ************************************ 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:55.782 ************************************ 00:05:55.782 START TEST dd_flag_nofollow_forced_aio 00:05:55.782 ************************************ 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.782 13:33:47 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:55.782 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.782 [2024-10-01 13:33:47.541252] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:55.782 [2024-10-01 13:33:47.541334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60319 ] 00:05:56.040 [2024-10-01 13:33:47.670566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.040 [2024-10-01 13:33:47.731274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.040 [2024-10-01 13:33:47.760157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.040 [2024-10-01 13:33:47.777625] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:56.040 [2024-10-01 13:33:47.777688] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:56.040 [2024-10-01 13:33:47.777717] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.040 [2024-10-01 13:33:47.838848] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.299 13:33:47 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:56.299 13:33:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:56.299 [2024-10-01 13:33:47.980112] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:56.299 [2024-10-01 13:33:47.980360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60328 ] 00:05:56.299 [2024-10-01 13:33:48.115388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.559 [2024-10-01 13:33:48.166140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.559 [2024-10-01 13:33:48.193764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.560 [2024-10-01 13:33:48.211509] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:56.560 [2024-10-01 13:33:48.211584] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:56.560 [2024-10-01 13:33:48.211615] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.560 [2024-10-01 13:33:48.270571] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:56.560 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:05:56.560 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.560 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:05:56.560 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:05:56.560 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:05:56.560 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.560 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:05:56.560 13:33:48 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:56.560 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:56.560 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:56.560 [2024-10-01 13:33:48.414882] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:56.560 [2024-10-01 13:33:48.415005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60330 ] 00:05:56.819 [2024-10-01 13:33:48.551751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.819 [2024-10-01 13:33:48.601200] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.819 [2024-10-01 13:33:48.629506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.079  Copying: 512/512 [B] (average 500 kBps) 00:05:57.079 00:05:57.079 ************************************ 00:05:57.079 END TEST dd_flag_nofollow_forced_aio 00:05:57.079 ************************************ 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 1ofylgvaxut9w0b6pmb3x7vytg37noejyszz0b89mi37gtjxyylj4m57nzxk20mqcsfe0fq27luiywzx8phq17h5mx4xj60nvwn7djyplu8u4jwc5jjgidz9u7hq14ohoajv6nnn55gaqop6fksytguf5vb3rn41q59llad3hsw3w35p098czfrvy5uhdco5lppol5ck7uasp7bvkoperzfigfbgcgbc2xaf074lcaapzzvloetgrg6oxo1vul1jv21g9ai692audy2377p53vsln47xclhwm0rgn6hhgbj1hk1xmbaz9iml8lqki3z8bh8b4mfit8eq596eakfwunef930p3od5g779qpcnmht3xs1vn4c44sbl2rt25sxsuxk8opyilu0qjhtluwlhizm0y8tmf5hxsw19chr5tyq0sxr9xw7z0vn5r1gbc1lt6z6tc288y5gszau0izihfote9skz1xbyuvn0qdke81hafcie3g1a6mih86t6ly46 == \1\o\f\y\l\g\v\a\x\u\t\9\w\0\b\6\p\m\b\3\x\7\v\y\t\g\3\7\n\o\e\j\y\s\z\z\0\b\8\9\m\i\3\7\g\t\j\x\y\y\l\j\4\m\5\7\n\z\x\k\2\0\m\q\c\s\f\e\0\f\q\2\7\l\u\i\y\w\z\x\8\p\h\q\1\7\h\5\m\x\4\x\j\6\0\n\v\w\n\7\d\j\y\p\l\u\8\u\4\j\w\c\5\j\j\g\i\d\z\9\u\7\h\q\1\4\o\h\o\a\j\v\6\n\n\n\5\5\g\a\q\o\p\6\f\k\s\y\t\g\u\f\5\v\b\3\r\n\4\1\q\5\9\l\l\a\d\3\h\s\w\3\w\3\5\p\0\9\8\c\z\f\r\v\y\5\u\h\d\c\o\5\l\p\p\o\l\5\c\k\7\u\a\s\p\7\b\v\k\o\p\e\r\z\f\i\g\f\b\g\c\g\b\c\2\x\a\f\0\7\4\l\c\a\a\p\z\z\v\l\o\e\t\g\r\g\6\o\x\o\1\v\u\l\1\j\v\2\1\g\9\a\i\6\9\2\a\u\d\y\2\3\7\7\p\5\3\v\s\l\n\4\7\x\c\l\h\w\m\0\r\g\n\6\h\h\g\b\j\1\h\k\1\x\m\b\a\z\9\i\m\l\8\l\q\k\i\3\z\8\b\h\8\b\4\m\f\i\t\8\e\q\5\9\6\e\a\k\f\w\u\n\e\f\9\3\0\p\3\o\d\5\g\7\7\9\q\p\c\n\m\h\t\3\x\s\1\v\n\4\c\4\4\s\b\l\2\r\t\2\5\s\x\s\u\x\k\8\o\p\y\i\l\u\0\q\j\h\t\l\u\w\l\h\i\z\m\0\y\8\t\m\f\5\h\x\s\w\1\9\c\h\r\5\t\y\q\0\s\x\r\9\x\w\7\z\0\v\n\5\r\1\g\b\c\1\l\t\6\z\6\t\c\2\8\8\y\5\g\s\z\a\u\0\i\z\i\h\f\o\t\e\9\s\k\z\1\x\b\y\u\v\n\0\q\d\k\e\8\1\h\a\f\c\i\e\3\g\1\a\6\m\i\h\8\6\t\6\l\y\4\6 ]] 00:05:57.079 00:05:57.079 real 0m1.355s 00:05:57.079 user 0m0.731s 00:05:57.079 sys 0m0.291s 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:05:57.079 
13:33:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:57.079 ************************************ 00:05:57.079 START TEST dd_flag_noatime_forced_aio 00:05:57.079 ************************************ 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1727789628 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1727789628 00:05:57.079 13:33:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:05:58.455 13:33:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.455 [2024-10-01 13:33:49.971060] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:05:58.456 [2024-10-01 13:33:49.971151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60376 ] 00:05:58.456 [2024-10-01 13:33:50.111211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.456 [2024-10-01 13:33:50.180655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.456 [2024-10-01 13:33:50.214226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.714  Copying: 512/512 [B] (average 500 kBps) 00:05:58.714 00:05:58.714 13:33:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:58.714 13:33:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1727789628 )) 00:05:58.714 13:33:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.714 13:33:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1727789628 )) 00:05:58.714 13:33:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.714 [2024-10-01 13:33:50.479489] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:58.714 [2024-10-01 13:33:50.479765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60382 ] 00:05:58.973 [2024-10-01 13:33:50.614981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.974 [2024-10-01 13:33:50.665447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.974 [2024-10-01 13:33:50.693464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.233  Copying: 512/512 [B] (average 500 kBps) 00:05:59.233 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:59.233 ************************************ 00:05:59.233 END TEST dd_flag_noatime_forced_aio 00:05:59.233 ************************************ 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1727789630 )) 00:05:59.233 00:05:59.233 real 0m1.993s 00:05:59.233 user 0m0.534s 00:05:59.233 sys 0m0.216s 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:59.233 
************************************ 00:05:59.233 START TEST dd_flags_misc_forced_aio 00:05:59.233 ************************************ 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:59.233 13:33:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:59.233 [2024-10-01 13:33:51.003660] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:59.233 [2024-10-01 13:33:51.003822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60414 ] 00:05:59.492 [2024-10-01 13:33:51.140889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.492 [2024-10-01 13:33:51.194208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.492 [2024-10-01 13:33:51.221408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.752  Copying: 512/512 [B] (average 500 kBps) 00:05:59.752 00:05:59.752 13:33:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ sbz4s8t58474sxvlv8rlxiv2mhn0wuu65jui4kfun8bpungwsp54q8mmnl9g3ct7wd53l0cs92krjo3v9lvp7af8c66yy59xeuu4z1udimnyilzcdc0uozgoaxwys5ogjeuwzeakipmdwynsmzfw3v7l7jbycztzgwyn0py33no76gbk07s3ahqpyswaajnug0jbn2n0ntsiz9caqfu5fewiy6xmo128yvl7mhfrjs8mnm6dz2thne3lk6lgay0r0m8rt25vmgmrnx6np3awybpjrbosdykz63h6bce0i5nh4dtr6cukz1673271bhvs1hpmnsljv086aqjtgm9n5tjnmn9emd08fz8q1u9ukyhbv55ocgh78l9nb0jl5s7178rqzgpu20n0rpg19fdonacpg7g4qehq5qfmargoqh43q379r0is0409hgtkiftl7len2tsgpng1arj0eswpfvgz5qnrvaniodlthosxnpvqcf1hofs2ksg1fy93cezb == 
\s\b\z\4\s\8\t\5\8\4\7\4\s\x\v\l\v\8\r\l\x\i\v\2\m\h\n\0\w\u\u\6\5\j\u\i\4\k\f\u\n\8\b\p\u\n\g\w\s\p\5\4\q\8\m\m\n\l\9\g\3\c\t\7\w\d\5\3\l\0\c\s\9\2\k\r\j\o\3\v\9\l\v\p\7\a\f\8\c\6\6\y\y\5\9\x\e\u\u\4\z\1\u\d\i\m\n\y\i\l\z\c\d\c\0\u\o\z\g\o\a\x\w\y\s\5\o\g\j\e\u\w\z\e\a\k\i\p\m\d\w\y\n\s\m\z\f\w\3\v\7\l\7\j\b\y\c\z\t\z\g\w\y\n\0\p\y\3\3\n\o\7\6\g\b\k\0\7\s\3\a\h\q\p\y\s\w\a\a\j\n\u\g\0\j\b\n\2\n\0\n\t\s\i\z\9\c\a\q\f\u\5\f\e\w\i\y\6\x\m\o\1\2\8\y\v\l\7\m\h\f\r\j\s\8\m\n\m\6\d\z\2\t\h\n\e\3\l\k\6\l\g\a\y\0\r\0\m\8\r\t\2\5\v\m\g\m\r\n\x\6\n\p\3\a\w\y\b\p\j\r\b\o\s\d\y\k\z\6\3\h\6\b\c\e\0\i\5\n\h\4\d\t\r\6\c\u\k\z\1\6\7\3\2\7\1\b\h\v\s\1\h\p\m\n\s\l\j\v\0\8\6\a\q\j\t\g\m\9\n\5\t\j\n\m\n\9\e\m\d\0\8\f\z\8\q\1\u\9\u\k\y\h\b\v\5\5\o\c\g\h\7\8\l\9\n\b\0\j\l\5\s\7\1\7\8\r\q\z\g\p\u\2\0\n\0\r\p\g\1\9\f\d\o\n\a\c\p\g\7\g\4\q\e\h\q\5\q\f\m\a\r\g\o\q\h\4\3\q\3\7\9\r\0\i\s\0\4\0\9\h\g\t\k\i\f\t\l\7\l\e\n\2\t\s\g\p\n\g\1\a\r\j\0\e\s\w\p\f\v\g\z\5\q\n\r\v\a\n\i\o\d\l\t\h\o\s\x\n\p\v\q\c\f\1\h\o\f\s\2\k\s\g\1\f\y\9\3\c\e\z\b ]] 00:05:59.752 13:33:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:59.752 13:33:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:59.752 [2024-10-01 13:33:51.478364] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:05:59.752 [2024-10-01 13:33:51.478613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60418 ] 00:06:00.042 [2024-10-01 13:33:51.616432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.042 [2024-10-01 13:33:51.671689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.042 [2024-10-01 13:33:51.699272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.303  Copying: 512/512 [B] (average 500 kBps) 00:06:00.303 00:06:00.303 13:33:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ sbz4s8t58474sxvlv8rlxiv2mhn0wuu65jui4kfun8bpungwsp54q8mmnl9g3ct7wd53l0cs92krjo3v9lvp7af8c66yy59xeuu4z1udimnyilzcdc0uozgoaxwys5ogjeuwzeakipmdwynsmzfw3v7l7jbycztzgwyn0py33no76gbk07s3ahqpyswaajnug0jbn2n0ntsiz9caqfu5fewiy6xmo128yvl7mhfrjs8mnm6dz2thne3lk6lgay0r0m8rt25vmgmrnx6np3awybpjrbosdykz63h6bce0i5nh4dtr6cukz1673271bhvs1hpmnsljv086aqjtgm9n5tjnmn9emd08fz8q1u9ukyhbv55ocgh78l9nb0jl5s7178rqzgpu20n0rpg19fdonacpg7g4qehq5qfmargoqh43q379r0is0409hgtkiftl7len2tsgpng1arj0eswpfvgz5qnrvaniodlthosxnpvqcf1hofs2ksg1fy93cezb == 
\s\b\z\4\s\8\t\5\8\4\7\4\s\x\v\l\v\8\r\l\x\i\v\2\m\h\n\0\w\u\u\6\5\j\u\i\4\k\f\u\n\8\b\p\u\n\g\w\s\p\5\4\q\8\m\m\n\l\9\g\3\c\t\7\w\d\5\3\l\0\c\s\9\2\k\r\j\o\3\v\9\l\v\p\7\a\f\8\c\6\6\y\y\5\9\x\e\u\u\4\z\1\u\d\i\m\n\y\i\l\z\c\d\c\0\u\o\z\g\o\a\x\w\y\s\5\o\g\j\e\u\w\z\e\a\k\i\p\m\d\w\y\n\s\m\z\f\w\3\v\7\l\7\j\b\y\c\z\t\z\g\w\y\n\0\p\y\3\3\n\o\7\6\g\b\k\0\7\s\3\a\h\q\p\y\s\w\a\a\j\n\u\g\0\j\b\n\2\n\0\n\t\s\i\z\9\c\a\q\f\u\5\f\e\w\i\y\6\x\m\o\1\2\8\y\v\l\7\m\h\f\r\j\s\8\m\n\m\6\d\z\2\t\h\n\e\3\l\k\6\l\g\a\y\0\r\0\m\8\r\t\2\5\v\m\g\m\r\n\x\6\n\p\3\a\w\y\b\p\j\r\b\o\s\d\y\k\z\6\3\h\6\b\c\e\0\i\5\n\h\4\d\t\r\6\c\u\k\z\1\6\7\3\2\7\1\b\h\v\s\1\h\p\m\n\s\l\j\v\0\8\6\a\q\j\t\g\m\9\n\5\t\j\n\m\n\9\e\m\d\0\8\f\z\8\q\1\u\9\u\k\y\h\b\v\5\5\o\c\g\h\7\8\l\9\n\b\0\j\l\5\s\7\1\7\8\r\q\z\g\p\u\2\0\n\0\r\p\g\1\9\f\d\o\n\a\c\p\g\7\g\4\q\e\h\q\5\q\f\m\a\r\g\o\q\h\4\3\q\3\7\9\r\0\i\s\0\4\0\9\h\g\t\k\i\f\t\l\7\l\e\n\2\t\s\g\p\n\g\1\a\r\j\0\e\s\w\p\f\v\g\z\5\q\n\r\v\a\n\i\o\d\l\t\h\o\s\x\n\p\v\q\c\f\1\h\o\f\s\2\k\s\g\1\f\y\9\3\c\e\z\b ]] 00:06:00.303 13:33:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:00.303 13:33:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:00.303 [2024-10-01 13:33:51.944264] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:06:00.303 [2024-10-01 13:33:51.944353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60431 ] 00:06:00.303 [2024-10-01 13:33:52.081587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.303 [2024-10-01 13:33:52.133049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.303 [2024-10-01 13:33:52.160030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.562  Copying: 512/512 [B] (average 250 kBps) 00:06:00.562 00:06:00.562 13:33:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ sbz4s8t58474sxvlv8rlxiv2mhn0wuu65jui4kfun8bpungwsp54q8mmnl9g3ct7wd53l0cs92krjo3v9lvp7af8c66yy59xeuu4z1udimnyilzcdc0uozgoaxwys5ogjeuwzeakipmdwynsmzfw3v7l7jbycztzgwyn0py33no76gbk07s3ahqpyswaajnug0jbn2n0ntsiz9caqfu5fewiy6xmo128yvl7mhfrjs8mnm6dz2thne3lk6lgay0r0m8rt25vmgmrnx6np3awybpjrbosdykz63h6bce0i5nh4dtr6cukz1673271bhvs1hpmnsljv086aqjtgm9n5tjnmn9emd08fz8q1u9ukyhbv55ocgh78l9nb0jl5s7178rqzgpu20n0rpg19fdonacpg7g4qehq5qfmargoqh43q379r0is0409hgtkiftl7len2tsgpng1arj0eswpfvgz5qnrvaniodlthosxnpvqcf1hofs2ksg1fy93cezb == 
\s\b\z\4\s\8\t\5\8\4\7\4\s\x\v\l\v\8\r\l\x\i\v\2\m\h\n\0\w\u\u\6\5\j\u\i\4\k\f\u\n\8\b\p\u\n\g\w\s\p\5\4\q\8\m\m\n\l\9\g\3\c\t\7\w\d\5\3\l\0\c\s\9\2\k\r\j\o\3\v\9\l\v\p\7\a\f\8\c\6\6\y\y\5\9\x\e\u\u\4\z\1\u\d\i\m\n\y\i\l\z\c\d\c\0\u\o\z\g\o\a\x\w\y\s\5\o\g\j\e\u\w\z\e\a\k\i\p\m\d\w\y\n\s\m\z\f\w\3\v\7\l\7\j\b\y\c\z\t\z\g\w\y\n\0\p\y\3\3\n\o\7\6\g\b\k\0\7\s\3\a\h\q\p\y\s\w\a\a\j\n\u\g\0\j\b\n\2\n\0\n\t\s\i\z\9\c\a\q\f\u\5\f\e\w\i\y\6\x\m\o\1\2\8\y\v\l\7\m\h\f\r\j\s\8\m\n\m\6\d\z\2\t\h\n\e\3\l\k\6\l\g\a\y\0\r\0\m\8\r\t\2\5\v\m\g\m\r\n\x\6\n\p\3\a\w\y\b\p\j\r\b\o\s\d\y\k\z\6\3\h\6\b\c\e\0\i\5\n\h\4\d\t\r\6\c\u\k\z\1\6\7\3\2\7\1\b\h\v\s\1\h\p\m\n\s\l\j\v\0\8\6\a\q\j\t\g\m\9\n\5\t\j\n\m\n\9\e\m\d\0\8\f\z\8\q\1\u\9\u\k\y\h\b\v\5\5\o\c\g\h\7\8\l\9\n\b\0\j\l\5\s\7\1\7\8\r\q\z\g\p\u\2\0\n\0\r\p\g\1\9\f\d\o\n\a\c\p\g\7\g\4\q\e\h\q\5\q\f\m\a\r\g\o\q\h\4\3\q\3\7\9\r\0\i\s\0\4\0\9\h\g\t\k\i\f\t\l\7\l\e\n\2\t\s\g\p\n\g\1\a\r\j\0\e\s\w\p\f\v\g\z\5\q\n\r\v\a\n\i\o\d\l\t\h\o\s\x\n\p\v\q\c\f\1\h\o\f\s\2\k\s\g\1\f\y\9\3\c\e\z\b ]] 00:06:00.562 13:33:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:00.562 13:33:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:00.562 [2024-10-01 13:33:52.411098] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:06:00.562 [2024-10-01 13:33:52.411371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60433 ] 00:06:00.822 [2024-10-01 13:33:52.548086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.822 [2024-10-01 13:33:52.600208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.822 [2024-10-01 13:33:52.629250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.081  Copying: 512/512 [B] (average 250 kBps) 00:06:01.081 00:06:01.081 13:33:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ sbz4s8t58474sxvlv8rlxiv2mhn0wuu65jui4kfun8bpungwsp54q8mmnl9g3ct7wd53l0cs92krjo3v9lvp7af8c66yy59xeuu4z1udimnyilzcdc0uozgoaxwys5ogjeuwzeakipmdwynsmzfw3v7l7jbycztzgwyn0py33no76gbk07s3ahqpyswaajnug0jbn2n0ntsiz9caqfu5fewiy6xmo128yvl7mhfrjs8mnm6dz2thne3lk6lgay0r0m8rt25vmgmrnx6np3awybpjrbosdykz63h6bce0i5nh4dtr6cukz1673271bhvs1hpmnsljv086aqjtgm9n5tjnmn9emd08fz8q1u9ukyhbv55ocgh78l9nb0jl5s7178rqzgpu20n0rpg19fdonacpg7g4qehq5qfmargoqh43q379r0is0409hgtkiftl7len2tsgpng1arj0eswpfvgz5qnrvaniodlthosxnpvqcf1hofs2ksg1fy93cezb == 
\s\b\z\4\s\8\t\5\8\4\7\4\s\x\v\l\v\8\r\l\x\i\v\2\m\h\n\0\w\u\u\6\5\j\u\i\4\k\f\u\n\8\b\p\u\n\g\w\s\p\5\4\q\8\m\m\n\l\9\g\3\c\t\7\w\d\5\3\l\0\c\s\9\2\k\r\j\o\3\v\9\l\v\p\7\a\f\8\c\6\6\y\y\5\9\x\e\u\u\4\z\1\u\d\i\m\n\y\i\l\z\c\d\c\0\u\o\z\g\o\a\x\w\y\s\5\o\g\j\e\u\w\z\e\a\k\i\p\m\d\w\y\n\s\m\z\f\w\3\v\7\l\7\j\b\y\c\z\t\z\g\w\y\n\0\p\y\3\3\n\o\7\6\g\b\k\0\7\s\3\a\h\q\p\y\s\w\a\a\j\n\u\g\0\j\b\n\2\n\0\n\t\s\i\z\9\c\a\q\f\u\5\f\e\w\i\y\6\x\m\o\1\2\8\y\v\l\7\m\h\f\r\j\s\8\m\n\m\6\d\z\2\t\h\n\e\3\l\k\6\l\g\a\y\0\r\0\m\8\r\t\2\5\v\m\g\m\r\n\x\6\n\p\3\a\w\y\b\p\j\r\b\o\s\d\y\k\z\6\3\h\6\b\c\e\0\i\5\n\h\4\d\t\r\6\c\u\k\z\1\6\7\3\2\7\1\b\h\v\s\1\h\p\m\n\s\l\j\v\0\8\6\a\q\j\t\g\m\9\n\5\t\j\n\m\n\9\e\m\d\0\8\f\z\8\q\1\u\9\u\k\y\h\b\v\5\5\o\c\g\h\7\8\l\9\n\b\0\j\l\5\s\7\1\7\8\r\q\z\g\p\u\2\0\n\0\r\p\g\1\9\f\d\o\n\a\c\p\g\7\g\4\q\e\h\q\5\q\f\m\a\r\g\o\q\h\4\3\q\3\7\9\r\0\i\s\0\4\0\9\h\g\t\k\i\f\t\l\7\l\e\n\2\t\s\g\p\n\g\1\a\r\j\0\e\s\w\p\f\v\g\z\5\q\n\r\v\a\n\i\o\d\l\t\h\o\s\x\n\p\v\q\c\f\1\h\o\f\s\2\k\s\g\1\f\y\9\3\c\e\z\b ]] 00:06:01.081 13:33:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:01.081 13:33:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:01.081 13:33:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:01.081 13:33:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:01.081 13:33:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:01.081 13:33:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:01.081 [2024-10-01 13:33:52.887157] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:01.081 [2024-10-01 13:33:52.887239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60445 ] 00:06:01.340 [2024-10-01 13:33:53.013833] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.340 [2024-10-01 13:33:53.063414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.340 [2024-10-01 13:33:53.092656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.599  Copying: 512/512 [B] (average 500 kBps) 00:06:01.599 00:06:01.599 13:33:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lwajbqja7z5vajrtw3kdb3dl7emf4uez94pbxufok6ylee3yuz55j6pufvrmlht7xfuourq4dtdfeld90ssxp6ugtbhdji00e7ma3ntvwo620h1jqiox91b78ziezoi14ju8ft98jqultwt8kf9djd0d76pzc46xya1ffjc4uxsiy37esaojaxpbtfnae2pmlhoa3yxsajfu1prvgj01nc1foa6kc5zf865bvzy0dyo0ss43naxowojmz4xpz3ixbm87w0yn7r4j66q3imw5ith1k6ek9ufsvvha2hxjj3ytsl7w1h9qrfvet6lfeazm8jhea9njaxasdagudtawdyrcgtr8mv9rrpg9dpmyh4svuodjph36ba9t2c1a6u340fn3jajd98ojonjyrwvuble1wkqfq0nlxdtwnwa13769tj9zrrxhfh2oamskxx37zkubkalbbgp0hly6yfjlsckfnapesue2kb85jjdz2klxtoy5mg1ghwdhmk0q6j94 == \l\w\a\j\b\q\j\a\7\z\5\v\a\j\r\t\w\3\k\d\b\3\d\l\7\e\m\f\4\u\e\z\9\4\p\b\x\u\f\o\k\6\y\l\e\e\3\y\u\z\5\5\j\6\p\u\f\v\r\m\l\h\t\7\x\f\u\o\u\r\q\4\d\t\d\f\e\l\d\9\0\s\s\x\p\6\u\g\t\b\h\d\j\i\0\0\e\7\m\a\3\n\t\v\w\o\6\2\0\h\1\j\q\i\o\x\9\1\b\7\8\z\i\e\z\o\i\1\4\j\u\8\f\t\9\8\j\q\u\l\t\w\t\8\k\f\9\d\j\d\0\d\7\6\p\z\c\4\6\x\y\a\1\f\f\j\c\4\u\x\s\i\y\3\7\e\s\a\o\j\a\x\p\b\t\f\n\a\e\2\p\m\l\h\o\a\3\y\x\s\a\j\f\u\1\p\r\v\g\j\0\1\n\c\1\f\o\a\6\k\c\5\z\f\8\6\5\b\v\z\y\0\d\y\o\0\s\s\4\3\n\a\x\o\w\o\j\m\z\4\x\p\z\3\i\x\b\m\8\7\w\0\y\n\7\r\4\j\6\6\q\3\i\m\w\5\i\t\h\1\k\6\e\k\9\u\f\s\v\v\h\a\2\h\x\j\j\3\y\t\s\l\7\w\1\h\9\q\r\f\v\e\t\6\l\f\e\a\z\m\8\j\h\e\a\9\n\j\a\x\a\s\d\a\g\u\d\t\a\w\d\y\r\c\g\t\r\8\m\v\9\r\r\p\g\9\d\p\m\y\h\4\s\v\u\o\d\j\p\h\3\6\b\a\9\t\2\c\1\a\6\u\3\4\0\f\n\3\j\a\j\d\9\8\o\j\o\n\j\y\r\w\v\u\b\l\e\1\w\k\q\f\q\0\n\l\x\d\t\w\n\w\a\1\3\7\6\9\t\j\9\z\r\r\x\h\f\h\2\o\a\m\s\k\x\x\3\7\z\k\u\b\k\a\l\b\b\g\p\0\h\l\y\6\y\f\j\l\s\c\k\f\n\a\p\e\s\u\e\2\k\b\8\5\j\j\d\z\2\k\l\x\t\o\y\5\m\g\1\g\h\w\d\h\m\k\0\q\6\j\9\4 ]] 00:06:01.599 13:33:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:01.599 13:33:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:01.599 [2024-10-01 13:33:53.326553] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:01.599 [2024-10-01 13:33:53.326637] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60448 ] 00:06:01.859 [2024-10-01 13:33:53.462026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.859 [2024-10-01 13:33:53.513087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.859 [2024-10-01 13:33:53.545067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.119  Copying: 512/512 [B] (average 500 kBps) 00:06:02.119 00:06:02.119 13:33:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lwajbqja7z5vajrtw3kdb3dl7emf4uez94pbxufok6ylee3yuz55j6pufvrmlht7xfuourq4dtdfeld90ssxp6ugtbhdji00e7ma3ntvwo620h1jqiox91b78ziezoi14ju8ft98jqultwt8kf9djd0d76pzc46xya1ffjc4uxsiy37esaojaxpbtfnae2pmlhoa3yxsajfu1prvgj01nc1foa6kc5zf865bvzy0dyo0ss43naxowojmz4xpz3ixbm87w0yn7r4j66q3imw5ith1k6ek9ufsvvha2hxjj3ytsl7w1h9qrfvet6lfeazm8jhea9njaxasdagudtawdyrcgtr8mv9rrpg9dpmyh4svuodjph36ba9t2c1a6u340fn3jajd98ojonjyrwvuble1wkqfq0nlxdtwnwa13769tj9zrrxhfh2oamskxx37zkubkalbbgp0hly6yfjlsckfnapesue2kb85jjdz2klxtoy5mg1ghwdhmk0q6j94 == \l\w\a\j\b\q\j\a\7\z\5\v\a\j\r\t\w\3\k\d\b\3\d\l\7\e\m\f\4\u\e\z\9\4\p\b\x\u\f\o\k\6\y\l\e\e\3\y\u\z\5\5\j\6\p\u\f\v\r\m\l\h\t\7\x\f\u\o\u\r\q\4\d\t\d\f\e\l\d\9\0\s\s\x\p\6\u\g\t\b\h\d\j\i\0\0\e\7\m\a\3\n\t\v\w\o\6\2\0\h\1\j\q\i\o\x\9\1\b\7\8\z\i\e\z\o\i\1\4\j\u\8\f\t\9\8\j\q\u\l\t\w\t\8\k\f\9\d\j\d\0\d\7\6\p\z\c\4\6\x\y\a\1\f\f\j\c\4\u\x\s\i\y\3\7\e\s\a\o\j\a\x\p\b\t\f\n\a\e\2\p\m\l\h\o\a\3\y\x\s\a\j\f\u\1\p\r\v\g\j\0\1\n\c\1\f\o\a\6\k\c\5\z\f\8\6\5\b\v\z\y\0\d\y\o\0\s\s\4\3\n\a\x\o\w\o\j\m\z\4\x\p\z\3\i\x\b\m\8\7\w\0\y\n\7\r\4\j\6\6\q\3\i\m\w\5\i\t\h\1\k\6\e\k\9\u\f\s\v\v\h\a\2\h\x\j\j\3\y\t\s\l\7\w\1\h\9\q\r\f\v\e\t\6\l\f\e\a\z\m\8\j\h\e\a\9\n\j\a\x\a\s\d\a\g\u\d\t\a\w\d\y\r\c\g\t\r\8\m\v\9\r\r\p\g\9\d\p\m\y\h\4\s\v\u\o\d\j\p\h\3\6\b\a\9\t\2\c\1\a\6\u\3\4\0\f\n\3\j\a\j\d\9\8\o\j\o\n\j\y\r\w\v\u\b\l\e\1\w\k\q\f\q\0\n\l\x\d\t\w\n\w\a\1\3\7\6\9\t\j\9\z\r\r\x\h\f\h\2\o\a\m\s\k\x\x\3\7\z\k\u\b\k\a\l\b\b\g\p\0\h\l\y\6\y\f\j\l\s\c\k\f\n\a\p\e\s\u\e\2\k\b\8\5\j\j\d\z\2\k\l\x\t\o\y\5\m\g\1\g\h\w\d\h\m\k\0\q\6\j\9\4 ]] 00:06:02.119 13:33:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:02.119 13:33:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:02.119 [2024-10-01 13:33:53.791687] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:02.119 [2024-10-01 13:33:53.791787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60456 ] 00:06:02.119 [2024-10-01 13:33:53.928560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.119 [2024-10-01 13:33:53.979254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.379 [2024-10-01 13:33:54.006880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.379  Copying: 512/512 [B] (average 250 kBps) 00:06:02.379 00:06:02.379 13:33:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lwajbqja7z5vajrtw3kdb3dl7emf4uez94pbxufok6ylee3yuz55j6pufvrmlht7xfuourq4dtdfeld90ssxp6ugtbhdji00e7ma3ntvwo620h1jqiox91b78ziezoi14ju8ft98jqultwt8kf9djd0d76pzc46xya1ffjc4uxsiy37esaojaxpbtfnae2pmlhoa3yxsajfu1prvgj01nc1foa6kc5zf865bvzy0dyo0ss43naxowojmz4xpz3ixbm87w0yn7r4j66q3imw5ith1k6ek9ufsvvha2hxjj3ytsl7w1h9qrfvet6lfeazm8jhea9njaxasdagudtawdyrcgtr8mv9rrpg9dpmyh4svuodjph36ba9t2c1a6u340fn3jajd98ojonjyrwvuble1wkqfq0nlxdtwnwa13769tj9zrrxhfh2oamskxx37zkubkalbbgp0hly6yfjlsckfnapesue2kb85jjdz2klxtoy5mg1ghwdhmk0q6j94 == \l\w\a\j\b\q\j\a\7\z\5\v\a\j\r\t\w\3\k\d\b\3\d\l\7\e\m\f\4\u\e\z\9\4\p\b\x\u\f\o\k\6\y\l\e\e\3\y\u\z\5\5\j\6\p\u\f\v\r\m\l\h\t\7\x\f\u\o\u\r\q\4\d\t\d\f\e\l\d\9\0\s\s\x\p\6\u\g\t\b\h\d\j\i\0\0\e\7\m\a\3\n\t\v\w\o\6\2\0\h\1\j\q\i\o\x\9\1\b\7\8\z\i\e\z\o\i\1\4\j\u\8\f\t\9\8\j\q\u\l\t\w\t\8\k\f\9\d\j\d\0\d\7\6\p\z\c\4\6\x\y\a\1\f\f\j\c\4\u\x\s\i\y\3\7\e\s\a\o\j\a\x\p\b\t\f\n\a\e\2\p\m\l\h\o\a\3\y\x\s\a\j\f\u\1\p\r\v\g\j\0\1\n\c\1\f\o\a\6\k\c\5\z\f\8\6\5\b\v\z\y\0\d\y\o\0\s\s\4\3\n\a\x\o\w\o\j\m\z\4\x\p\z\3\i\x\b\m\8\7\w\0\y\n\7\r\4\j\6\6\q\3\i\m\w\5\i\t\h\1\k\6\e\k\9\u\f\s\v\v\h\a\2\h\x\j\j\3\y\t\s\l\7\w\1\h\9\q\r\f\v\e\t\6\l\f\e\a\z\m\8\j\h\e\a\9\n\j\a\x\a\s\d\a\g\u\d\t\a\w\d\y\r\c\g\t\r\8\m\v\9\r\r\p\g\9\d\p\m\y\h\4\s\v\u\o\d\j\p\h\3\6\b\a\9\t\2\c\1\a\6\u\3\4\0\f\n\3\j\a\j\d\9\8\o\j\o\n\j\y\r\w\v\u\b\l\e\1\w\k\q\f\q\0\n\l\x\d\t\w\n\w\a\1\3\7\6\9\t\j\9\z\r\r\x\h\f\h\2\o\a\m\s\k\x\x\3\7\z\k\u\b\k\a\l\b\b\g\p\0\h\l\y\6\y\f\j\l\s\c\k\f\n\a\p\e\s\u\e\2\k\b\8\5\j\j\d\z\2\k\l\x\t\o\y\5\m\g\1\g\h\w\d\h\m\k\0\q\6\j\9\4 ]] 00:06:02.379 13:33:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:02.379 13:33:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:02.638 [2024-10-01 13:33:54.254306] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:02.638 [2024-10-01 13:33:54.254562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60463 ] 00:06:02.638 [2024-10-01 13:33:54.390775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.638 [2024-10-01 13:33:54.439610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.638 [2024-10-01 13:33:54.468745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.897  Copying: 512/512 [B] (average 250 kBps) 00:06:02.897 00:06:02.897 ************************************ 00:06:02.897 END TEST dd_flags_misc_forced_aio 00:06:02.897 ************************************ 00:06:02.898 13:33:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lwajbqja7z5vajrtw3kdb3dl7emf4uez94pbxufok6ylee3yuz55j6pufvrmlht7xfuourq4dtdfeld90ssxp6ugtbhdji00e7ma3ntvwo620h1jqiox91b78ziezoi14ju8ft98jqultwt8kf9djd0d76pzc46xya1ffjc4uxsiy37esaojaxpbtfnae2pmlhoa3yxsajfu1prvgj01nc1foa6kc5zf865bvzy0dyo0ss43naxowojmz4xpz3ixbm87w0yn7r4j66q3imw5ith1k6ek9ufsvvha2hxjj3ytsl7w1h9qrfvet6lfeazm8jhea9njaxasdagudtawdyrcgtr8mv9rrpg9dpmyh4svuodjph36ba9t2c1a6u340fn3jajd98ojonjyrwvuble1wkqfq0nlxdtwnwa13769tj9zrrxhfh2oamskxx37zkubkalbbgp0hly6yfjlsckfnapesue2kb85jjdz2klxtoy5mg1ghwdhmk0q6j94 == \l\w\a\j\b\q\j\a\7\z\5\v\a\j\r\t\w\3\k\d\b\3\d\l\7\e\m\f\4\u\e\z\9\4\p\b\x\u\f\o\k\6\y\l\e\e\3\y\u\z\5\5\j\6\p\u\f\v\r\m\l\h\t\7\x\f\u\o\u\r\q\4\d\t\d\f\e\l\d\9\0\s\s\x\p\6\u\g\t\b\h\d\j\i\0\0\e\7\m\a\3\n\t\v\w\o\6\2\0\h\1\j\q\i\o\x\9\1\b\7\8\z\i\e\z\o\i\1\4\j\u\8\f\t\9\8\j\q\u\l\t\w\t\8\k\f\9\d\j\d\0\d\7\6\p\z\c\4\6\x\y\a\1\f\f\j\c\4\u\x\s\i\y\3\7\e\s\a\o\j\a\x\p\b\t\f\n\a\e\2\p\m\l\h\o\a\3\y\x\s\a\j\f\u\1\p\r\v\g\j\0\1\n\c\1\f\o\a\6\k\c\5\z\f\8\6\5\b\v\z\y\0\d\y\o\0\s\s\4\3\n\a\x\o\w\o\j\m\z\4\x\p\z\3\i\x\b\m\8\7\w\0\y\n\7\r\4\j\6\6\q\3\i\m\w\5\i\t\h\1\k\6\e\k\9\u\f\s\v\v\h\a\2\h\x\j\j\3\y\t\s\l\7\w\1\h\9\q\r\f\v\e\t\6\l\f\e\a\z\m\8\j\h\e\a\9\n\j\a\x\a\s\d\a\g\u\d\t\a\w\d\y\r\c\g\t\r\8\m\v\9\r\r\p\g\9\d\p\m\y\h\4\s\v\u\o\d\j\p\h\3\6\b\a\9\t\2\c\1\a\6\u\3\4\0\f\n\3\j\a\j\d\9\8\o\j\o\n\j\y\r\w\v\u\b\l\e\1\w\k\q\f\q\0\n\l\x\d\t\w\n\w\a\1\3\7\6\9\t\j\9\z\r\r\x\h\f\h\2\o\a\m\s\k\x\x\3\7\z\k\u\b\k\a\l\b\b\g\p\0\h\l\y\6\y\f\j\l\s\c\k\f\n\a\p\e\s\u\e\2\k\b\8\5\j\j\d\z\2\k\l\x\t\o\y\5\m\g\1\g\h\w\d\h\m\k\0\q\6\j\9\4 ]] 00:06:02.898 00:06:02.898 real 0m3.725s 00:06:02.898 user 0m1.982s 00:06:02.898 sys 0m0.752s 00:06:02.898 13:33:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.898 13:33:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:02.898 13:33:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:02.898 13:33:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:02.898 13:33:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:02.898 ************************************ 00:06:02.898 END TEST spdk_dd_posix 00:06:02.898 ************************************ 00:06:02.898 00:06:02.898 real 0m17.156s 00:06:02.898 user 0m7.957s 00:06:02.898 sys 0m4.513s 00:06:02.898 13:33:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 
-- # xtrace_disable 00:06:02.898 13:33:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:03.158 13:33:54 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:03.158 13:33:54 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.158 13:33:54 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.158 13:33:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:03.158 ************************************ 00:06:03.158 START TEST spdk_dd_malloc 00:06:03.158 ************************************ 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:03.158 * Looking for test storage... 00:06:03.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:03.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.158 --rc genhtml_branch_coverage=1 00:06:03.158 --rc genhtml_function_coverage=1 00:06:03.158 --rc genhtml_legend=1 00:06:03.158 --rc geninfo_all_blocks=1 00:06:03.158 --rc geninfo_unexecuted_blocks=1 00:06:03.158 00:06:03.158 ' 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:03.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.158 --rc genhtml_branch_coverage=1 00:06:03.158 --rc genhtml_function_coverage=1 00:06:03.158 --rc genhtml_legend=1 00:06:03.158 --rc geninfo_all_blocks=1 00:06:03.158 --rc geninfo_unexecuted_blocks=1 00:06:03.158 00:06:03.158 ' 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:03.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.158 --rc genhtml_branch_coverage=1 00:06:03.158 --rc genhtml_function_coverage=1 00:06:03.158 --rc genhtml_legend=1 00:06:03.158 --rc geninfo_all_blocks=1 00:06:03.158 --rc geninfo_unexecuted_blocks=1 00:06:03.158 00:06:03.158 ' 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:03.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.158 --rc genhtml_branch_coverage=1 00:06:03.158 --rc genhtml_function_coverage=1 00:06:03.158 --rc genhtml_legend=1 00:06:03.158 --rc geninfo_all_blocks=1 00:06:03.158 --rc geninfo_unexecuted_blocks=1 00:06:03.158 00:06:03.158 ' 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.158 13:33:54 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:03.158 ************************************ 00:06:03.158 START TEST dd_malloc_copy 00:06:03.158 ************************************ 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:03.158 13:33:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:03.159 13:33:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:03.159 13:33:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:03.159 13:33:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:03.418 [2024-10-01 13:33:55.036318] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:06:03.418 [2024-10-01 13:33:55.036576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60545 ] 00:06:03.418 { 00:06:03.418 "subsystems": [ 00:06:03.418 { 00:06:03.418 "subsystem": "bdev", 00:06:03.418 "config": [ 00:06:03.418 { 00:06:03.418 "params": { 00:06:03.418 "block_size": 512, 00:06:03.418 "num_blocks": 1048576, 00:06:03.418 "name": "malloc0" 00:06:03.418 }, 00:06:03.418 "method": "bdev_malloc_create" 00:06:03.418 }, 00:06:03.418 { 00:06:03.418 "params": { 00:06:03.418 "block_size": 512, 00:06:03.418 "num_blocks": 1048576, 00:06:03.418 "name": "malloc1" 00:06:03.418 }, 00:06:03.418 "method": "bdev_malloc_create" 00:06:03.418 }, 00:06:03.418 { 00:06:03.418 "method": "bdev_wait_for_examine" 00:06:03.418 } 00:06:03.418 ] 00:06:03.418 } 00:06:03.418 ] 00:06:03.418 } 00:06:03.418 [2024-10-01 13:33:55.172810] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.418 [2024-10-01 13:33:55.243383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.418 [2024-10-01 13:33:55.277183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.247  Copying: 232/512 [MB] (232 MBps) Copying: 463/512 [MB] (230 MBps) Copying: 512/512 [MB] (average 231 MBps) 00:06:06.247 00:06:06.247 13:33:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:06.247 13:33:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:06.247 13:33:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:06.247 13:33:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:06.247 [2024-10-01 13:33:58.086867] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:06.247 [2024-10-01 13:33:58.086969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60582 ] 00:06:06.247 { 00:06:06.247 "subsystems": [ 00:06:06.247 { 00:06:06.247 "subsystem": "bdev", 00:06:06.247 "config": [ 00:06:06.247 { 00:06:06.247 "params": { 00:06:06.247 "block_size": 512, 00:06:06.248 "num_blocks": 1048576, 00:06:06.248 "name": "malloc0" 00:06:06.248 }, 00:06:06.248 "method": "bdev_malloc_create" 00:06:06.248 }, 00:06:06.248 { 00:06:06.248 "params": { 00:06:06.248 "block_size": 512, 00:06:06.248 "num_blocks": 1048576, 00:06:06.248 "name": "malloc1" 00:06:06.248 }, 00:06:06.248 "method": "bdev_malloc_create" 00:06:06.248 }, 00:06:06.248 { 00:06:06.248 "method": "bdev_wait_for_examine" 00:06:06.248 } 00:06:06.248 ] 00:06:06.248 } 00:06:06.248 ] 00:06:06.248 } 00:06:06.507 [2024-10-01 13:33:58.224950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.507 [2024-10-01 13:33:58.279955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.507 [2024-10-01 13:33:58.309601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.339  Copying: 232/512 [MB] (232 MBps) Copying: 459/512 [MB] (227 MBps) Copying: 512/512 [MB] (average 230 MBps) 00:06:09.339 00:06:09.339 00:06:09.339 real 0m6.092s 00:06:09.339 user 0m5.448s 00:06:09.339 sys 0m0.495s 00:06:09.339 13:34:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.339 ************************************ 00:06:09.339 END TEST dd_malloc_copy 00:06:09.339 ************************************ 00:06:09.339 13:34:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:09.339 ************************************ 00:06:09.339 END TEST spdk_dd_malloc 00:06:09.339 ************************************ 00:06:09.339 00:06:09.339 real 0m6.348s 00:06:09.339 user 0m5.592s 00:06:09.339 sys 0m0.608s 00:06:09.339 13:34:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.339 13:34:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:09.340 13:34:01 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:09.340 13:34:01 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:09.340 13:34:01 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.340 13:34:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:09.340 ************************************ 00:06:09.340 START TEST spdk_dd_bdev_to_bdev 00:06:09.340 ************************************ 00:06:09.340 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:09.598 * Looking for test storage... 
00:06:09.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.599 --rc genhtml_branch_coverage=1 00:06:09.599 --rc genhtml_function_coverage=1 00:06:09.599 --rc genhtml_legend=1 00:06:09.599 --rc geninfo_all_blocks=1 00:06:09.599 --rc geninfo_unexecuted_blocks=1 00:06:09.599 00:06:09.599 ' 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.599 --rc genhtml_branch_coverage=1 00:06:09.599 --rc genhtml_function_coverage=1 00:06:09.599 --rc genhtml_legend=1 00:06:09.599 --rc geninfo_all_blocks=1 00:06:09.599 --rc geninfo_unexecuted_blocks=1 00:06:09.599 00:06:09.599 ' 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:09.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.599 --rc genhtml_branch_coverage=1 00:06:09.599 --rc genhtml_function_coverage=1 00:06:09.599 --rc genhtml_legend=1 00:06:09.599 --rc geninfo_all_blocks=1 00:06:09.599 --rc geninfo_unexecuted_blocks=1 00:06:09.599 00:06:09.599 ' 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.599 --rc genhtml_branch_coverage=1 00:06:09.599 --rc genhtml_function_coverage=1 00:06:09.599 --rc genhtml_legend=1 00:06:09.599 --rc geninfo_all_blocks=1 00:06:09.599 --rc geninfo_unexecuted_blocks=1 00:06:09.599 00:06:09.599 ' 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.599 13:34:01 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:09.599 ************************************ 00:06:09.599 START TEST dd_inflate_file 00:06:09.599 ************************************ 00:06:09.599 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:09.599 [2024-10-01 13:34:01.408378] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:09.599 [2024-10-01 13:34:01.408632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60696 ] 00:06:09.858 [2024-10-01 13:34:01.538495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.858 [2024-10-01 13:34:01.590238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.858 [2024-10-01 13:34:01.619291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.117  Copying: 64/64 [MB] (average 1600 MBps) 00:06:10.117 00:06:10.117 00:06:10.117 real 0m0.460s 00:06:10.117 user 0m0.266s 00:06:10.117 sys 0m0.222s 00:06:10.117 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.117 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:10.117 ************************************ 00:06:10.117 END TEST dd_inflate_file 00:06:10.117 ************************************ 00:06:10.117 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:10.117 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:10.117 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:10.117 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:10.117 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:10.117 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:10.117 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:10.117 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.117 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:10.117 ************************************ 00:06:10.117 START TEST dd_copy_to_out_bdev 00:06:10.117 ************************************ 00:06:10.118 13:34:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:10.118 { 00:06:10.118 "subsystems": [ 00:06:10.118 { 00:06:10.118 "subsystem": "bdev", 00:06:10.118 "config": [ 00:06:10.118 { 00:06:10.118 "params": { 00:06:10.118 "trtype": "pcie", 00:06:10.118 "traddr": "0000:00:10.0", 00:06:10.118 "name": "Nvme0" 00:06:10.118 }, 00:06:10.118 "method": "bdev_nvme_attach_controller" 00:06:10.118 }, 00:06:10.118 { 00:06:10.118 "params": { 00:06:10.118 "trtype": "pcie", 00:06:10.118 "traddr": "0000:00:11.0", 00:06:10.118 "name": "Nvme1" 00:06:10.118 }, 00:06:10.118 "method": "bdev_nvme_attach_controller" 00:06:10.118 }, 00:06:10.118 { 00:06:10.118 "method": "bdev_wait_for_examine" 00:06:10.118 } 00:06:10.118 ] 00:06:10.118 } 00:06:10.118 ] 00:06:10.118 } 00:06:10.118 [2024-10-01 13:34:01.943764] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:10.118 [2024-10-01 13:34:01.943864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60730 ] 00:06:10.377 [2024-10-01 13:34:02.079787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.377 [2024-10-01 13:34:02.138505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.377 [2024-10-01 13:34:02.167432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.011  Copying: 52/64 [MB] (52 MBps) Copying: 64/64 [MB] (average 52 MBps) 00:06:12.011 00:06:12.011 00:06:12.011 real 0m1.811s 00:06:12.011 user 0m1.638s 00:06:12.011 sys 0m1.430s 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:12.011 ************************************ 00:06:12.011 END TEST dd_copy_to_out_bdev 00:06:12.011 ************************************ 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:12.011 ************************************ 00:06:12.011 START TEST dd_offset_magic 00:06:12.011 ************************************ 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:12.011 13:34:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:12.011 [2024-10-01 13:34:03.813198] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:12.011 [2024-10-01 13:34:03.813476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60769 ] 00:06:12.011 { 00:06:12.011 "subsystems": [ 00:06:12.011 { 00:06:12.011 "subsystem": "bdev", 00:06:12.011 "config": [ 00:06:12.011 { 00:06:12.011 "params": { 00:06:12.011 "trtype": "pcie", 00:06:12.011 "traddr": "0000:00:10.0", 00:06:12.011 "name": "Nvme0" 00:06:12.011 }, 00:06:12.011 "method": "bdev_nvme_attach_controller" 00:06:12.011 }, 00:06:12.011 { 00:06:12.011 "params": { 00:06:12.011 "trtype": "pcie", 00:06:12.011 "traddr": "0000:00:11.0", 00:06:12.011 "name": "Nvme1" 00:06:12.011 }, 00:06:12.011 "method": "bdev_nvme_attach_controller" 00:06:12.011 }, 00:06:12.011 { 00:06:12.011 "method": "bdev_wait_for_examine" 00:06:12.011 } 00:06:12.011 ] 00:06:12.011 } 00:06:12.011 ] 00:06:12.011 } 00:06:12.270 [2024-10-01 13:34:03.951781] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.270 [2024-10-01 13:34:04.000450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.270 [2024-10-01 13:34:04.028149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.788  Copying: 65/65 [MB] (average 955 MBps) 00:06:12.788 00:06:12.788 13:34:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:12.788 13:34:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:12.788 13:34:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:12.788 13:34:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:12.788 [2024-10-01 13:34:04.510842] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:12.788 [2024-10-01 13:34:04.510935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60789 ] 00:06:12.788 { 00:06:12.788 "subsystems": [ 00:06:12.788 { 00:06:12.788 "subsystem": "bdev", 00:06:12.788 "config": [ 00:06:12.788 { 00:06:12.788 "params": { 00:06:12.788 "trtype": "pcie", 00:06:12.788 "traddr": "0000:00:10.0", 00:06:12.788 "name": "Nvme0" 00:06:12.788 }, 00:06:12.788 "method": "bdev_nvme_attach_controller" 00:06:12.788 }, 00:06:12.788 { 00:06:12.788 "params": { 00:06:12.788 "trtype": "pcie", 00:06:12.788 "traddr": "0000:00:11.0", 00:06:12.788 "name": "Nvme1" 00:06:12.788 }, 00:06:12.788 "method": "bdev_nvme_attach_controller" 00:06:12.788 }, 00:06:12.788 { 00:06:12.788 "method": "bdev_wait_for_examine" 00:06:12.788 } 00:06:12.788 ] 00:06:12.788 } 00:06:12.788 ] 00:06:12.788 } 00:06:13.047 [2024-10-01 13:34:04.649289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.047 [2024-10-01 13:34:04.703597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.047 [2024-10-01 13:34:04.731519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.306  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:13.306 00:06:13.306 13:34:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:13.306 13:34:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:13.306 13:34:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:13.306 13:34:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:13.306 13:34:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:13.306 13:34:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:13.306 13:34:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:13.306 [2024-10-01 13:34:05.086394] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:13.306 [2024-10-01 13:34:05.086494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60806 ] 00:06:13.306 { 00:06:13.306 "subsystems": [ 00:06:13.306 { 00:06:13.306 "subsystem": "bdev", 00:06:13.306 "config": [ 00:06:13.306 { 00:06:13.306 "params": { 00:06:13.306 "trtype": "pcie", 00:06:13.306 "traddr": "0000:00:10.0", 00:06:13.306 "name": "Nvme0" 00:06:13.306 }, 00:06:13.306 "method": "bdev_nvme_attach_controller" 00:06:13.306 }, 00:06:13.306 { 00:06:13.306 "params": { 00:06:13.306 "trtype": "pcie", 00:06:13.306 "traddr": "0000:00:11.0", 00:06:13.306 "name": "Nvme1" 00:06:13.306 }, 00:06:13.306 "method": "bdev_nvme_attach_controller" 00:06:13.306 }, 00:06:13.306 { 00:06:13.306 "method": "bdev_wait_for_examine" 00:06:13.306 } 00:06:13.306 ] 00:06:13.306 } 00:06:13.306 ] 00:06:13.306 } 00:06:13.565 [2024-10-01 13:34:05.219068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.565 [2024-10-01 13:34:05.271815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.565 [2024-10-01 13:34:05.299965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.109  Copying: 65/65 [MB] (average 1031 MBps) 00:06:14.109 00:06:14.109 13:34:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:14.109 13:34:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:14.109 13:34:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:14.109 13:34:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:14.109 { 00:06:14.109 "subsystems": [ 00:06:14.109 { 00:06:14.109 "subsystem": "bdev", 00:06:14.109 "config": [ 00:06:14.109 { 00:06:14.109 "params": { 00:06:14.109 "trtype": "pcie", 00:06:14.109 "traddr": "0000:00:10.0", 00:06:14.109 "name": "Nvme0" 00:06:14.109 }, 00:06:14.109 "method": "bdev_nvme_attach_controller" 00:06:14.109 }, 00:06:14.109 { 00:06:14.109 "params": { 00:06:14.109 "trtype": "pcie", 00:06:14.109 "traddr": "0000:00:11.0", 00:06:14.109 "name": "Nvme1" 00:06:14.109 }, 00:06:14.109 "method": "bdev_nvme_attach_controller" 00:06:14.109 }, 00:06:14.109 { 00:06:14.109 "method": "bdev_wait_for_examine" 00:06:14.109 } 00:06:14.109 ] 00:06:14.109 } 00:06:14.109 ] 00:06:14.109 } 00:06:14.109 [2024-10-01 13:34:05.774684] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:14.109 [2024-10-01 13:34:05.774776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60820 ] 00:06:14.109 [2024-10-01 13:34:05.911166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.109 [2024-10-01 13:34:05.960657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.368 [2024-10-01 13:34:05.989242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.628  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:14.628 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:14.628 ************************************ 00:06:14.628 END TEST dd_offset_magic 00:06:14.628 ************************************ 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:14.628 00:06:14.628 real 0m2.542s 00:06:14.628 user 0m1.913s 00:06:14.628 sys 0m0.600s 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:14.628 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:14.628 { 00:06:14.628 "subsystems": [ 00:06:14.628 { 00:06:14.628 "subsystem": "bdev", 00:06:14.628 "config": [ 00:06:14.628 { 00:06:14.628 "params": { 00:06:14.628 "trtype": "pcie", 00:06:14.628 "traddr": "0000:00:10.0", 00:06:14.628 "name": "Nvme0" 00:06:14.628 }, 00:06:14.628 "method": "bdev_nvme_attach_controller" 00:06:14.628 }, 00:06:14.628 { 00:06:14.628 "params": { 00:06:14.628 "trtype": "pcie", 00:06:14.628 "traddr": "0000:00:11.0", 00:06:14.628 "name": "Nvme1" 00:06:14.628 }, 00:06:14.628 "method": "bdev_nvme_attach_controller" 00:06:14.628 }, 00:06:14.628 { 00:06:14.628 "method": "bdev_wait_for_examine" 00:06:14.628 } 00:06:14.628 ] 00:06:14.628 } 00:06:14.628 ] 00:06:14.628 } 00:06:14.628 [2024-10-01 13:34:06.396819] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:14.628 [2024-10-01 13:34:06.396913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60852 ] 00:06:14.887 [2024-10-01 13:34:06.531524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.887 [2024-10-01 13:34:06.580544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.887 [2024-10-01 13:34:06.608193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.146  Copying: 5120/5120 [kB] (average 1250 MBps) 00:06:15.146 00:06:15.146 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:15.146 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:15.146 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:15.146 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:15.146 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:15.146 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:15.146 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:15.146 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:15.146 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:15.146 13:34:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:15.146 [2024-10-01 13:34:06.976755] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:15.146 [2024-10-01 13:34:06.976853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60873 ] 00:06:15.146 { 00:06:15.146 "subsystems": [ 00:06:15.146 { 00:06:15.146 "subsystem": "bdev", 00:06:15.146 "config": [ 00:06:15.146 { 00:06:15.146 "params": { 00:06:15.146 "trtype": "pcie", 00:06:15.146 "traddr": "0000:00:10.0", 00:06:15.146 "name": "Nvme0" 00:06:15.146 }, 00:06:15.146 "method": "bdev_nvme_attach_controller" 00:06:15.146 }, 00:06:15.146 { 00:06:15.146 "params": { 00:06:15.146 "trtype": "pcie", 00:06:15.146 "traddr": "0000:00:11.0", 00:06:15.146 "name": "Nvme1" 00:06:15.146 }, 00:06:15.146 "method": "bdev_nvme_attach_controller" 00:06:15.146 }, 00:06:15.146 { 00:06:15.146 "method": "bdev_wait_for_examine" 00:06:15.146 } 00:06:15.146 ] 00:06:15.146 } 00:06:15.146 ] 00:06:15.146 } 00:06:15.405 [2024-10-01 13:34:07.112980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.405 [2024-10-01 13:34:07.167073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.405 [2024-10-01 13:34:07.194809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.663  Copying: 5120/5120 [kB] (average 833 MBps) 00:06:15.663 00:06:15.664 13:34:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:15.923 ************************************ 00:06:15.924 END TEST spdk_dd_bdev_to_bdev 00:06:15.924 ************************************ 00:06:15.924 00:06:15.924 real 0m6.358s 00:06:15.924 user 0m4.817s 00:06:15.924 sys 0m2.813s 00:06:15.924 13:34:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.924 13:34:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:15.924 13:34:07 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:15.924 13:34:07 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:15.924 13:34:07 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.924 13:34:07 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.924 13:34:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:15.924 ************************************ 00:06:15.924 START TEST spdk_dd_uring 00:06:15.924 ************************************ 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:15.924 * Looking for test storage... 
00:06:15.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:15.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.924 --rc genhtml_branch_coverage=1 00:06:15.924 --rc genhtml_function_coverage=1 00:06:15.924 --rc genhtml_legend=1 00:06:15.924 --rc geninfo_all_blocks=1 00:06:15.924 --rc geninfo_unexecuted_blocks=1 00:06:15.924 00:06:15.924 ' 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:15.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.924 --rc genhtml_branch_coverage=1 00:06:15.924 --rc genhtml_function_coverage=1 00:06:15.924 --rc genhtml_legend=1 00:06:15.924 --rc geninfo_all_blocks=1 00:06:15.924 --rc geninfo_unexecuted_blocks=1 00:06:15.924 00:06:15.924 ' 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:15.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.924 --rc genhtml_branch_coverage=1 00:06:15.924 --rc genhtml_function_coverage=1 00:06:15.924 --rc genhtml_legend=1 00:06:15.924 --rc geninfo_all_blocks=1 00:06:15.924 --rc geninfo_unexecuted_blocks=1 00:06:15.924 00:06:15.924 ' 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:15.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.924 --rc genhtml_branch_coverage=1 00:06:15.924 --rc genhtml_function_coverage=1 00:06:15.924 --rc genhtml_legend=1 00:06:15.924 --rc geninfo_all_blocks=1 00:06:15.924 --rc geninfo_unexecuted_blocks=1 00:06:15.924 00:06:15.924 ' 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.924 13:34:07 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:15.925 ************************************ 00:06:15.925 START TEST dd_uring_copy 00:06:15.925 ************************************ 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:15.925 
13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:15.925 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:16.185 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=y38yrvx6siwtm519fnzn8pzqru4kouyrqjsxbysxgaur9wz5z6vvuran6qcb7xhmk1el5rtl3g1txiy3thiwsnconhk72wni7wv48q3ndp1lwmrnr9spncfi6iu4xlbahpbgkidfm82idlmr753v05oliy22ypvqefbfdwbsuy6gtjkut9wa1efu1xccjkpagjrandsrwtl9pmagyzw6mdefvs057hh9edguxi4lne6na58ith6orx2cjoj4se5lr3b91rxlxa74wi695vf54ol2v5cwfviiqql3ycmr9bdo6tl56tnn7a4cv1ejocgjsc2mo95q7osv116ipy24kz7se8asbq784dwpweo3cck9vs0ejr9l1b8hsngbf8rux66xbdbwc0khnoibytjeyv5gochkl91gj4e9a1ksuwqbines9a11v4s5z1pkcvrmf934q8kjykn1p5z66csoftnea4q2fo2fddh9264ln1ospncnfctcz9hny9j8ces76wh1j5rvii7s4rglbmckc1iq5710q70yt8fau2sjnworfdjr2iphjlo1m3dx9x5f73o94klsr1poajp2rcjyerxk6iwwa1vxih2gbnz54ska3dudsekeng0neqw0d1agl173o7ywvusv2hfn9mvksi7070cfk3rfce5w3rqmv3033q7ui358uesrfmu9msn6tt5e95slftwhrh7qovje3pys2114mozu42hsxthuddg0ea75e5ltl237nhw1qt9ylagviqzfg4ed7x1qcrhh5by76clwbwr0go7w0c5i2yzq4nrnf3s2xkjfwyfngime9wu1txvasmpkxrbky02z102jczz5ek1c3ybj10sc8xo56nwtqvg1xw6e74gh6grqfl588m1xejexydrqxx2a0gv61z9hq8aqomh2q0pp4c4pzk6ry81sykkngdh5c1dm1q1a7f4cipk5d9z6mxfaoa6fd9fry44w6ff3a2zq853aagf5b5od1b38n48n4hjl 00:06:16.185 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
y38yrvx6siwtm519fnzn8pzqru4kouyrqjsxbysxgaur9wz5z6vvuran6qcb7xhmk1el5rtl3g1txiy3thiwsnconhk72wni7wv48q3ndp1lwmrnr9spncfi6iu4xlbahpbgkidfm82idlmr753v05oliy22ypvqefbfdwbsuy6gtjkut9wa1efu1xccjkpagjrandsrwtl9pmagyzw6mdefvs057hh9edguxi4lne6na58ith6orx2cjoj4se5lr3b91rxlxa74wi695vf54ol2v5cwfviiqql3ycmr9bdo6tl56tnn7a4cv1ejocgjsc2mo95q7osv116ipy24kz7se8asbq784dwpweo3cck9vs0ejr9l1b8hsngbf8rux66xbdbwc0khnoibytjeyv5gochkl91gj4e9a1ksuwqbines9a11v4s5z1pkcvrmf934q8kjykn1p5z66csoftnea4q2fo2fddh9264ln1ospncnfctcz9hny9j8ces76wh1j5rvii7s4rglbmckc1iq5710q70yt8fau2sjnworfdjr2iphjlo1m3dx9x5f73o94klsr1poajp2rcjyerxk6iwwa1vxih2gbnz54ska3dudsekeng0neqw0d1agl173o7ywvusv2hfn9mvksi7070cfk3rfce5w3rqmv3033q7ui358uesrfmu9msn6tt5e95slftwhrh7qovje3pys2114mozu42hsxthuddg0ea75e5ltl237nhw1qt9ylagviqzfg4ed7x1qcrhh5by76clwbwr0go7w0c5i2yzq4nrnf3s2xkjfwyfngime9wu1txvasmpkxrbky02z102jczz5ek1c3ybj10sc8xo56nwtqvg1xw6e74gh6grqfl588m1xejexydrqxx2a0gv61z9hq8aqomh2q0pp4c4pzk6ry81sykkngdh5c1dm1q1a7f4cipk5d9z6mxfaoa6fd9fry44w6ff3a2zq853aagf5b5od1b38n48n4hjl 00:06:16.185 13:34:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:16.185 [2024-10-01 13:34:07.837282] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:06:16.185 [2024-10-01 13:34:07.837376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60946 ] 00:06:16.185 [2024-10-01 13:34:07.966924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.185 [2024-10-01 13:34:08.030418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.444 [2024-10-01 13:34:08.061518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.012  Copying: 511/511 [MB] (average 1292 MBps) 00:06:17.012 00:06:17.012 13:34:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:17.012 13:34:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:17.012 13:34:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:17.012 13:34:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:17.273 [2024-10-01 13:34:08.914872] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:17.273 [2024-10-01 13:34:08.915395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60963 ] 00:06:17.273 { 00:06:17.273 "subsystems": [ 00:06:17.273 { 00:06:17.273 "subsystem": "bdev", 00:06:17.273 "config": [ 00:06:17.273 { 00:06:17.273 "params": { 00:06:17.273 "block_size": 512, 00:06:17.273 "num_blocks": 1048576, 00:06:17.273 "name": "malloc0" 00:06:17.273 }, 00:06:17.273 "method": "bdev_malloc_create" 00:06:17.273 }, 00:06:17.273 { 00:06:17.273 "params": { 00:06:17.273 "filename": "/dev/zram1", 00:06:17.273 "name": "uring0" 00:06:17.273 }, 00:06:17.273 "method": "bdev_uring_create" 00:06:17.273 }, 00:06:17.273 { 00:06:17.273 "method": "bdev_wait_for_examine" 00:06:17.273 } 00:06:17.273 ] 00:06:17.273 } 00:06:17.273 ] 00:06:17.273 } 00:06:17.273 [2024-10-01 13:34:09.049460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.273 [2024-10-01 13:34:09.109132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.532 [2024-10-01 13:34:09.140099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.031  Copying: 223/512 [MB] (223 MBps) Copying: 450/512 [MB] (226 MBps) Copying: 512/512 [MB] (average 224 MBps) 00:06:20.031 00:06:20.031 13:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:20.031 13:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:20.031 13:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:20.031 13:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:20.031 { 00:06:20.031 "subsystems": [ 00:06:20.031 { 00:06:20.031 "subsystem": "bdev", 00:06:20.031 "config": [ 00:06:20.031 { 00:06:20.031 "params": { 00:06:20.031 "block_size": 512, 00:06:20.031 "num_blocks": 1048576, 00:06:20.031 "name": "malloc0" 00:06:20.031 }, 00:06:20.031 "method": "bdev_malloc_create" 00:06:20.031 }, 00:06:20.031 { 00:06:20.031 "params": { 00:06:20.031 "filename": "/dev/zram1", 00:06:20.031 "name": "uring0" 00:06:20.031 }, 00:06:20.031 "method": "bdev_uring_create" 00:06:20.031 }, 00:06:20.031 { 00:06:20.031 "method": "bdev_wait_for_examine" 00:06:20.031 } 00:06:20.031 ] 00:06:20.031 } 00:06:20.031 ] 00:06:20.031 } 00:06:20.031 [2024-10-01 13:34:11.859377] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:20.031 [2024-10-01 13:34:11.859469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61007 ] 00:06:20.293 [2024-10-01 13:34:11.992696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.293 [2024-10-01 13:34:12.057752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.293 [2024-10-01 13:34:12.090334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.850  Copying: 177/512 [MB] (177 MBps) Copying: 349/512 [MB] (172 MBps) Copying: 496/512 [MB] (146 MBps) Copying: 512/512 [MB] (average 165 MBps) 00:06:23.850 00:06:23.850 13:34:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:23.851 13:34:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ y38yrvx6siwtm519fnzn8pzqru4kouyrqjsxbysxgaur9wz5z6vvuran6qcb7xhmk1el5rtl3g1txiy3thiwsnconhk72wni7wv48q3ndp1lwmrnr9spncfi6iu4xlbahpbgkidfm82idlmr753v05oliy22ypvqefbfdwbsuy6gtjkut9wa1efu1xccjkpagjrandsrwtl9pmagyzw6mdefvs057hh9edguxi4lne6na58ith6orx2cjoj4se5lr3b91rxlxa74wi695vf54ol2v5cwfviiqql3ycmr9bdo6tl56tnn7a4cv1ejocgjsc2mo95q7osv116ipy24kz7se8asbq784dwpweo3cck9vs0ejr9l1b8hsngbf8rux66xbdbwc0khnoibytjeyv5gochkl91gj4e9a1ksuwqbines9a11v4s5z1pkcvrmf934q8kjykn1p5z66csoftnea4q2fo2fddh9264ln1ospncnfctcz9hny9j8ces76wh1j5rvii7s4rglbmckc1iq5710q70yt8fau2sjnworfdjr2iphjlo1m3dx9x5f73o94klsr1poajp2rcjyerxk6iwwa1vxih2gbnz54ska3dudsekeng0neqw0d1agl173o7ywvusv2hfn9mvksi7070cfk3rfce5w3rqmv3033q7ui358uesrfmu9msn6tt5e95slftwhrh7qovje3pys2114mozu42hsxthuddg0ea75e5ltl237nhw1qt9ylagviqzfg4ed7x1qcrhh5by76clwbwr0go7w0c5i2yzq4nrnf3s2xkjfwyfngime9wu1txvasmpkxrbky02z102jczz5ek1c3ybj10sc8xo56nwtqvg1xw6e74gh6grqfl588m1xejexydrqxx2a0gv61z9hq8aqomh2q0pp4c4pzk6ry81sykkngdh5c1dm1q1a7f4cipk5d9z6mxfaoa6fd9fry44w6ff3a2zq853aagf5b5od1b38n48n4hjl == 
\y\3\8\y\r\v\x\6\s\i\w\t\m\5\1\9\f\n\z\n\8\p\z\q\r\u\4\k\o\u\y\r\q\j\s\x\b\y\s\x\g\a\u\r\9\w\z\5\z\6\v\v\u\r\a\n\6\q\c\b\7\x\h\m\k\1\e\l\5\r\t\l\3\g\1\t\x\i\y\3\t\h\i\w\s\n\c\o\n\h\k\7\2\w\n\i\7\w\v\4\8\q\3\n\d\p\1\l\w\m\r\n\r\9\s\p\n\c\f\i\6\i\u\4\x\l\b\a\h\p\b\g\k\i\d\f\m\8\2\i\d\l\m\r\7\5\3\v\0\5\o\l\i\y\2\2\y\p\v\q\e\f\b\f\d\w\b\s\u\y\6\g\t\j\k\u\t\9\w\a\1\e\f\u\1\x\c\c\j\k\p\a\g\j\r\a\n\d\s\r\w\t\l\9\p\m\a\g\y\z\w\6\m\d\e\f\v\s\0\5\7\h\h\9\e\d\g\u\x\i\4\l\n\e\6\n\a\5\8\i\t\h\6\o\r\x\2\c\j\o\j\4\s\e\5\l\r\3\b\9\1\r\x\l\x\a\7\4\w\i\6\9\5\v\f\5\4\o\l\2\v\5\c\w\f\v\i\i\q\q\l\3\y\c\m\r\9\b\d\o\6\t\l\5\6\t\n\n\7\a\4\c\v\1\e\j\o\c\g\j\s\c\2\m\o\9\5\q\7\o\s\v\1\1\6\i\p\y\2\4\k\z\7\s\e\8\a\s\b\q\7\8\4\d\w\p\w\e\o\3\c\c\k\9\v\s\0\e\j\r\9\l\1\b\8\h\s\n\g\b\f\8\r\u\x\6\6\x\b\d\b\w\c\0\k\h\n\o\i\b\y\t\j\e\y\v\5\g\o\c\h\k\l\9\1\g\j\4\e\9\a\1\k\s\u\w\q\b\i\n\e\s\9\a\1\1\v\4\s\5\z\1\p\k\c\v\r\m\f\9\3\4\q\8\k\j\y\k\n\1\p\5\z\6\6\c\s\o\f\t\n\e\a\4\q\2\f\o\2\f\d\d\h\9\2\6\4\l\n\1\o\s\p\n\c\n\f\c\t\c\z\9\h\n\y\9\j\8\c\e\s\7\6\w\h\1\j\5\r\v\i\i\7\s\4\r\g\l\b\m\c\k\c\1\i\q\5\7\1\0\q\7\0\y\t\8\f\a\u\2\s\j\n\w\o\r\f\d\j\r\2\i\p\h\j\l\o\1\m\3\d\x\9\x\5\f\7\3\o\9\4\k\l\s\r\1\p\o\a\j\p\2\r\c\j\y\e\r\x\k\6\i\w\w\a\1\v\x\i\h\2\g\b\n\z\5\4\s\k\a\3\d\u\d\s\e\k\e\n\g\0\n\e\q\w\0\d\1\a\g\l\1\7\3\o\7\y\w\v\u\s\v\2\h\f\n\9\m\v\k\s\i\7\0\7\0\c\f\k\3\r\f\c\e\5\w\3\r\q\m\v\3\0\3\3\q\7\u\i\3\5\8\u\e\s\r\f\m\u\9\m\s\n\6\t\t\5\e\9\5\s\l\f\t\w\h\r\h\7\q\o\v\j\e\3\p\y\s\2\1\1\4\m\o\z\u\4\2\h\s\x\t\h\u\d\d\g\0\e\a\7\5\e\5\l\t\l\2\3\7\n\h\w\1\q\t\9\y\l\a\g\v\i\q\z\f\g\4\e\d\7\x\1\q\c\r\h\h\5\b\y\7\6\c\l\w\b\w\r\0\g\o\7\w\0\c\5\i\2\y\z\q\4\n\r\n\f\3\s\2\x\k\j\f\w\y\f\n\g\i\m\e\9\w\u\1\t\x\v\a\s\m\p\k\x\r\b\k\y\0\2\z\1\0\2\j\c\z\z\5\e\k\1\c\3\y\b\j\1\0\s\c\8\x\o\5\6\n\w\t\q\v\g\1\x\w\6\e\7\4\g\h\6\g\r\q\f\l\5\8\8\m\1\x\e\j\e\x\y\d\r\q\x\x\2\a\0\g\v\6\1\z\9\h\q\8\a\q\o\m\h\2\q\0\p\p\4\c\4\p\z\k\6\r\y\8\1\s\y\k\k\n\g\d\h\5\c\1\d\m\1\q\1\a\7\f\4\c\i\p\k\5\d\9\z\6\m\x\f\a\o\a\6\f\d\9\f\r\y\4\4\w\6\f\f\3\a\2\z\q\8\5\3\a\a\g\f\5\b\5\o\d\1\b\3\8\n\4\8\n\4\h\j\l ]] 00:06:23.851 13:34:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:23.851 13:34:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ y38yrvx6siwtm519fnzn8pzqru4kouyrqjsxbysxgaur9wz5z6vvuran6qcb7xhmk1el5rtl3g1txiy3thiwsnconhk72wni7wv48q3ndp1lwmrnr9spncfi6iu4xlbahpbgkidfm82idlmr753v05oliy22ypvqefbfdwbsuy6gtjkut9wa1efu1xccjkpagjrandsrwtl9pmagyzw6mdefvs057hh9edguxi4lne6na58ith6orx2cjoj4se5lr3b91rxlxa74wi695vf54ol2v5cwfviiqql3ycmr9bdo6tl56tnn7a4cv1ejocgjsc2mo95q7osv116ipy24kz7se8asbq784dwpweo3cck9vs0ejr9l1b8hsngbf8rux66xbdbwc0khnoibytjeyv5gochkl91gj4e9a1ksuwqbines9a11v4s5z1pkcvrmf934q8kjykn1p5z66csoftnea4q2fo2fddh9264ln1ospncnfctcz9hny9j8ces76wh1j5rvii7s4rglbmckc1iq5710q70yt8fau2sjnworfdjr2iphjlo1m3dx9x5f73o94klsr1poajp2rcjyerxk6iwwa1vxih2gbnz54ska3dudsekeng0neqw0d1agl173o7ywvusv2hfn9mvksi7070cfk3rfce5w3rqmv3033q7ui358uesrfmu9msn6tt5e95slftwhrh7qovje3pys2114mozu42hsxthuddg0ea75e5ltl237nhw1qt9ylagviqzfg4ed7x1qcrhh5by76clwbwr0go7w0c5i2yzq4nrnf3s2xkjfwyfngime9wu1txvasmpkxrbky02z102jczz5ek1c3ybj10sc8xo56nwtqvg1xw6e74gh6grqfl588m1xejexydrqxx2a0gv61z9hq8aqomh2q0pp4c4pzk6ry81sykkngdh5c1dm1q1a7f4cipk5d9z6mxfaoa6fd9fry44w6ff3a2zq853aagf5b5od1b38n48n4hjl == 
\y\3\8\y\r\v\x\6\s\i\w\t\m\5\1\9\f\n\z\n\8\p\z\q\r\u\4\k\o\u\y\r\q\j\s\x\b\y\s\x\g\a\u\r\9\w\z\5\z\6\v\v\u\r\a\n\6\q\c\b\7\x\h\m\k\1\e\l\5\r\t\l\3\g\1\t\x\i\y\3\t\h\i\w\s\n\c\o\n\h\k\7\2\w\n\i\7\w\v\4\8\q\3\n\d\p\1\l\w\m\r\n\r\9\s\p\n\c\f\i\6\i\u\4\x\l\b\a\h\p\b\g\k\i\d\f\m\8\2\i\d\l\m\r\7\5\3\v\0\5\o\l\i\y\2\2\y\p\v\q\e\f\b\f\d\w\b\s\u\y\6\g\t\j\k\u\t\9\w\a\1\e\f\u\1\x\c\c\j\k\p\a\g\j\r\a\n\d\s\r\w\t\l\9\p\m\a\g\y\z\w\6\m\d\e\f\v\s\0\5\7\h\h\9\e\d\g\u\x\i\4\l\n\e\6\n\a\5\8\i\t\h\6\o\r\x\2\c\j\o\j\4\s\e\5\l\r\3\b\9\1\r\x\l\x\a\7\4\w\i\6\9\5\v\f\5\4\o\l\2\v\5\c\w\f\v\i\i\q\q\l\3\y\c\m\r\9\b\d\o\6\t\l\5\6\t\n\n\7\a\4\c\v\1\e\j\o\c\g\j\s\c\2\m\o\9\5\q\7\o\s\v\1\1\6\i\p\y\2\4\k\z\7\s\e\8\a\s\b\q\7\8\4\d\w\p\w\e\o\3\c\c\k\9\v\s\0\e\j\r\9\l\1\b\8\h\s\n\g\b\f\8\r\u\x\6\6\x\b\d\b\w\c\0\k\h\n\o\i\b\y\t\j\e\y\v\5\g\o\c\h\k\l\9\1\g\j\4\e\9\a\1\k\s\u\w\q\b\i\n\e\s\9\a\1\1\v\4\s\5\z\1\p\k\c\v\r\m\f\9\3\4\q\8\k\j\y\k\n\1\p\5\z\6\6\c\s\o\f\t\n\e\a\4\q\2\f\o\2\f\d\d\h\9\2\6\4\l\n\1\o\s\p\n\c\n\f\c\t\c\z\9\h\n\y\9\j\8\c\e\s\7\6\w\h\1\j\5\r\v\i\i\7\s\4\r\g\l\b\m\c\k\c\1\i\q\5\7\1\0\q\7\0\y\t\8\f\a\u\2\s\j\n\w\o\r\f\d\j\r\2\i\p\h\j\l\o\1\m\3\d\x\9\x\5\f\7\3\o\9\4\k\l\s\r\1\p\o\a\j\p\2\r\c\j\y\e\r\x\k\6\i\w\w\a\1\v\x\i\h\2\g\b\n\z\5\4\s\k\a\3\d\u\d\s\e\k\e\n\g\0\n\e\q\w\0\d\1\a\g\l\1\7\3\o\7\y\w\v\u\s\v\2\h\f\n\9\m\v\k\s\i\7\0\7\0\c\f\k\3\r\f\c\e\5\w\3\r\q\m\v\3\0\3\3\q\7\u\i\3\5\8\u\e\s\r\f\m\u\9\m\s\n\6\t\t\5\e\9\5\s\l\f\t\w\h\r\h\7\q\o\v\j\e\3\p\y\s\2\1\1\4\m\o\z\u\4\2\h\s\x\t\h\u\d\d\g\0\e\a\7\5\e\5\l\t\l\2\3\7\n\h\w\1\q\t\9\y\l\a\g\v\i\q\z\f\g\4\e\d\7\x\1\q\c\r\h\h\5\b\y\7\6\c\l\w\b\w\r\0\g\o\7\w\0\c\5\i\2\y\z\q\4\n\r\n\f\3\s\2\x\k\j\f\w\y\f\n\g\i\m\e\9\w\u\1\t\x\v\a\s\m\p\k\x\r\b\k\y\0\2\z\1\0\2\j\c\z\z\5\e\k\1\c\3\y\b\j\1\0\s\c\8\x\o\5\6\n\w\t\q\v\g\1\x\w\6\e\7\4\g\h\6\g\r\q\f\l\5\8\8\m\1\x\e\j\e\x\y\d\r\q\x\x\2\a\0\g\v\6\1\z\9\h\q\8\a\q\o\m\h\2\q\0\p\p\4\c\4\p\z\k\6\r\y\8\1\s\y\k\k\n\g\d\h\5\c\1\d\m\1\q\1\a\7\f\4\c\i\p\k\5\d\9\z\6\m\x\f\a\o\a\6\f\d\9\f\r\y\4\4\w\6\f\f\3\a\2\z\q\8\5\3\a\a\g\f\5\b\5\o\d\1\b\3\8\n\4\8\n\4\h\j\l ]] 00:06:23.851 13:34:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:24.417 13:34:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:24.417 13:34:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:24.417 13:34:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:24.417 13:34:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:24.417 [2024-10-01 13:34:16.034888] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:24.417 [2024-10-01 13:34:16.035013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61077 ] 00:06:24.417 { 00:06:24.417 "subsystems": [ 00:06:24.417 { 00:06:24.417 "subsystem": "bdev", 00:06:24.417 "config": [ 00:06:24.417 { 00:06:24.417 "params": { 00:06:24.417 "block_size": 512, 00:06:24.417 "num_blocks": 1048576, 00:06:24.417 "name": "malloc0" 00:06:24.417 }, 00:06:24.417 "method": "bdev_malloc_create" 00:06:24.417 }, 00:06:24.417 { 00:06:24.417 "params": { 00:06:24.417 "filename": "/dev/zram1", 00:06:24.417 "name": "uring0" 00:06:24.417 }, 00:06:24.417 "method": "bdev_uring_create" 00:06:24.417 }, 00:06:24.417 { 00:06:24.417 "method": "bdev_wait_for_examine" 00:06:24.417 } 00:06:24.417 ] 00:06:24.417 } 00:06:24.417 ] 00:06:24.417 } 00:06:24.417 [2024-10-01 13:34:16.173878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.417 [2024-10-01 13:34:16.241828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.417 [2024-10-01 13:34:16.274484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.267  Copying: 150/512 [MB] (150 MBps) Copying: 300/512 [MB] (150 MBps) Copying: 446/512 [MB] (145 MBps) Copying: 512/512 [MB] (average 148 MBps) 00:06:28.267 00:06:28.267 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:28.267 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:28.267 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:28.267 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:28.267 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:28.267 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:28.267 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:28.267 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:28.525 [2024-10-01 13:34:20.174991] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:28.525 [2024-10-01 13:34:20.175111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61133 ] 00:06:28.525 { 00:06:28.525 "subsystems": [ 00:06:28.525 { 00:06:28.525 "subsystem": "bdev", 00:06:28.525 "config": [ 00:06:28.525 { 00:06:28.525 "params": { 00:06:28.525 "block_size": 512, 00:06:28.526 "num_blocks": 1048576, 00:06:28.526 "name": "malloc0" 00:06:28.526 }, 00:06:28.526 "method": "bdev_malloc_create" 00:06:28.526 }, 00:06:28.526 { 00:06:28.526 "params": { 00:06:28.526 "filename": "/dev/zram1", 00:06:28.526 "name": "uring0" 00:06:28.526 }, 00:06:28.526 "method": "bdev_uring_create" 00:06:28.526 }, 00:06:28.526 { 00:06:28.526 "params": { 00:06:28.526 "name": "uring0" 00:06:28.526 }, 00:06:28.526 "method": "bdev_uring_delete" 00:06:28.526 }, 00:06:28.526 { 00:06:28.526 "method": "bdev_wait_for_examine" 00:06:28.526 } 00:06:28.526 ] 00:06:28.526 } 00:06:28.526 ] 00:06:28.526 } 00:06:28.526 [2024-10-01 13:34:20.312002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.526 [2024-10-01 13:34:20.375856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.785 [2024-10-01 13:34:20.408954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.045  Copying: 0/0 [B] (average 0 Bps) 00:06:29.045 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:29.045 13:34:20 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:29.045 { 00:06:29.045 "subsystems": [ 00:06:29.045 { 00:06:29.045 "subsystem": "bdev", 00:06:29.045 "config": [ 00:06:29.045 { 00:06:29.045 "params": { 00:06:29.045 "block_size": 512, 00:06:29.045 "num_blocks": 1048576, 00:06:29.045 "name": "malloc0" 00:06:29.045 }, 00:06:29.045 "method": "bdev_malloc_create" 00:06:29.045 }, 00:06:29.045 { 00:06:29.045 "params": { 00:06:29.045 "filename": "/dev/zram1", 00:06:29.045 "name": "uring0" 00:06:29.045 }, 00:06:29.045 "method": "bdev_uring_create" 00:06:29.045 }, 00:06:29.045 { 00:06:29.045 "params": { 00:06:29.045 "name": "uring0" 00:06:29.045 }, 00:06:29.045 "method": "bdev_uring_delete" 00:06:29.045 }, 00:06:29.045 { 00:06:29.045 "method": "bdev_wait_for_examine" 00:06:29.045 } 00:06:29.045 ] 00:06:29.045 } 00:06:29.045 ] 00:06:29.045 } 00:06:29.045 [2024-10-01 13:34:20.868226] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:06:29.045 [2024-10-01 13:34:20.868334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61162 ] 00:06:29.304 [2024-10-01 13:34:21.007707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.304 [2024-10-01 13:34:21.061699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.304 [2024-10-01 13:34:21.090067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.572 [2024-10-01 13:34:21.214020] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:29.572 [2024-10-01 13:34:21.214093] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:29.572 [2024-10-01 13:34:21.214119] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:29.572 [2024-10-01 13:34:21.214128] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.572 [2024-10-01 13:34:21.388155] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:29.852 00:06:29.852 real 0m13.934s 00:06:29.852 user 0m9.582s 00:06:29.852 sys 0m11.886s 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.852 13:34:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:29.852 ************************************ 00:06:29.852 END TEST dd_uring_copy 00:06:29.852 ************************************ 00:06:30.111 ************************************ 00:06:30.111 END TEST spdk_dd_uring 00:06:30.111 ************************************ 00:06:30.111 00:06:30.111 real 0m14.163s 00:06:30.111 user 0m9.711s 00:06:30.111 sys 0m11.991s 00:06:30.111 13:34:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.111 13:34:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:30.111 13:34:21 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:30.111 13:34:21 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.111 13:34:21 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.111 13:34:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:30.111 ************************************ 00:06:30.111 START TEST spdk_dd_sparse 00:06:30.111 ************************************ 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:30.111 * Looking for test storage... 00:06:30.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:30.111 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.112 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:30.112 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.112 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:30.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.371 --rc genhtml_branch_coverage=1 00:06:30.371 --rc genhtml_function_coverage=1 00:06:30.371 --rc genhtml_legend=1 00:06:30.371 --rc geninfo_all_blocks=1 00:06:30.371 --rc geninfo_unexecuted_blocks=1 00:06:30.371 00:06:30.371 ' 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:30.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.371 --rc genhtml_branch_coverage=1 00:06:30.371 --rc genhtml_function_coverage=1 00:06:30.371 --rc genhtml_legend=1 00:06:30.371 --rc geninfo_all_blocks=1 00:06:30.371 --rc geninfo_unexecuted_blocks=1 00:06:30.371 00:06:30.371 ' 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:30.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.371 --rc genhtml_branch_coverage=1 00:06:30.371 --rc genhtml_function_coverage=1 00:06:30.371 --rc genhtml_legend=1 00:06:30.371 --rc geninfo_all_blocks=1 00:06:30.371 --rc geninfo_unexecuted_blocks=1 00:06:30.371 00:06:30.371 ' 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:30.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.371 --rc genhtml_branch_coverage=1 00:06:30.371 --rc genhtml_function_coverage=1 00:06:30.371 --rc genhtml_legend=1 00:06:30.371 --rc geninfo_all_blocks=1 00:06:30.371 --rc geninfo_unexecuted_blocks=1 00:06:30.371 00:06:30.371 ' 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.371 13:34:21 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:30.371 1+0 records in 00:06:30.371 1+0 records out 00:06:30.371 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00699482 s, 600 MB/s 00:06:30.371 13:34:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:30.371 1+0 records in 00:06:30.371 1+0 records out 00:06:30.371 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00663279 s, 632 MB/s 00:06:30.371 13:34:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:30.371 1+0 records in 00:06:30.371 1+0 records out 00:06:30.371 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00666872 s, 629 MB/s 00:06:30.371 13:34:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:30.371 13:34:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.371 13:34:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.371 13:34:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:30.371 ************************************ 00:06:30.371 START TEST dd_sparse_file_to_file 00:06:30.371 ************************************ 00:06:30.371 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:06:30.371 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:30.371 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:30.371 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:30.371 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:30.371 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:30.372 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:30.372 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:30.372 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:30.372 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:30.372 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:30.372 [2024-10-01 13:34:22.086444] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
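The prepare step logged above is plain shell: a 100 MiB file backs the dd_aio bdev, and file_zero1 is given three 4 MiB data extents separated by holes, which is what makes the later size/allocation comparisons meaningful. A standalone sketch of the same layout (names and sizes copied from the log):

  truncate dd_sparse_aio_disk --size 104857600          # 100 MiB plain file backing the dd_aio bdev
  dd if=/dev/zero of=file_zero1 bs=4M count=1            # 4 MiB of data at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4     # 4 MiB at offset 16 MiB, hole before it
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8     # 4 MiB at offset 32 MiB, hole before it
  stat --printf='%s %b\n' file_zero1                     # 37748736 bytes apparent, 24576 512-byte blocks allocated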
00:06:30.372 [2024-10-01 13:34:22.086560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61256 ] 00:06:30.372 { 00:06:30.372 "subsystems": [ 00:06:30.372 { 00:06:30.372 "subsystem": "bdev", 00:06:30.372 "config": [ 00:06:30.372 { 00:06:30.372 "params": { 00:06:30.372 "block_size": 4096, 00:06:30.372 "filename": "dd_sparse_aio_disk", 00:06:30.372 "name": "dd_aio" 00:06:30.372 }, 00:06:30.372 "method": "bdev_aio_create" 00:06:30.372 }, 00:06:30.372 { 00:06:30.372 "params": { 00:06:30.372 "lvs_name": "dd_lvstore", 00:06:30.372 "bdev_name": "dd_aio" 00:06:30.372 }, 00:06:30.372 "method": "bdev_lvol_create_lvstore" 00:06:30.372 }, 00:06:30.372 { 00:06:30.372 "method": "bdev_wait_for_examine" 00:06:30.372 } 00:06:30.372 ] 00:06:30.372 } 00:06:30.372 ] 00:06:30.372 } 00:06:30.372 [2024-10-01 13:34:22.220492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.630 [2024-10-01 13:34:22.291612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.630 [2024-10-01 13:34:22.325764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.889  Copying: 12/36 [MB] (average 923 MBps) 00:06:30.889 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:30.889 00:06:30.889 real 0m0.604s 00:06:30.889 user 0m0.364s 00:06:30.889 sys 0m0.282s 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:30.889 ************************************ 00:06:30.889 END TEST dd_sparse_file_to_file 00:06:30.889 ************************************ 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:30.889 ************************************ 00:06:30.889 START TEST dd_sparse_file_to_bdev 00:06:30.889 
************************************ 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:30.889 13:34:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:30.889 [2024-10-01 13:34:22.739914] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:06:30.889 [2024-10-01 13:34:22.740026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61304 ] 00:06:30.889 { 00:06:30.889 "subsystems": [ 00:06:30.889 { 00:06:30.889 "subsystem": "bdev", 00:06:30.889 "config": [ 00:06:30.889 { 00:06:30.889 "params": { 00:06:30.889 "block_size": 4096, 00:06:30.889 "filename": "dd_sparse_aio_disk", 00:06:30.889 "name": "dd_aio" 00:06:30.889 }, 00:06:30.890 "method": "bdev_aio_create" 00:06:30.890 }, 00:06:30.890 { 00:06:30.890 "params": { 00:06:30.890 "lvs_name": "dd_lvstore", 00:06:30.890 "lvol_name": "dd_lvol", 00:06:30.890 "size_in_mib": 36, 00:06:30.890 "thin_provision": true 00:06:30.890 }, 00:06:30.890 "method": "bdev_lvol_create" 00:06:30.890 }, 00:06:30.890 { 00:06:30.890 "method": "bdev_wait_for_examine" 00:06:30.890 } 00:06:30.890 ] 00:06:30.890 } 00:06:30.890 ] 00:06:30.890 } 00:06:31.149 [2024-10-01 13:34:22.879793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.149 [2024-10-01 13:34:22.949825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.149 [2024-10-01 13:34:22.983559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.407  Copying: 12/36 [MB] (average 545 MBps) 00:06:31.407 00:06:31.407 00:06:31.407 real 0m0.559s 00:06:31.407 user 0m0.381s 00:06:31.407 sys 0m0.245s 00:06:31.407 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.407 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:31.407 ************************************ 00:06:31.407 END TEST dd_sparse_file_to_bdev 00:06:31.407 ************************************ 00:06:31.667 13:34:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:06:31.667 13:34:23 
spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.667 13:34:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.668 13:34:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:31.668 ************************************ 00:06:31.668 START TEST dd_sparse_bdev_to_file 00:06:31.668 ************************************ 00:06:31.668 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:06:31.668 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:31.668 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:31.668 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:31.668 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:31.668 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:31.668 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:31.668 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:31.668 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:31.668 [2024-10-01 13:34:23.354064] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:06:31.668 [2024-10-01 13:34:23.354170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61337 ] 00:06:31.668 { 00:06:31.668 "subsystems": [ 00:06:31.668 { 00:06:31.668 "subsystem": "bdev", 00:06:31.668 "config": [ 00:06:31.668 { 00:06:31.668 "params": { 00:06:31.668 "block_size": 4096, 00:06:31.668 "filename": "dd_sparse_aio_disk", 00:06:31.668 "name": "dd_aio" 00:06:31.668 }, 00:06:31.668 "method": "bdev_aio_create" 00:06:31.668 }, 00:06:31.668 { 00:06:31.668 "method": "bdev_wait_for_examine" 00:06:31.668 } 00:06:31.668 ] 00:06:31.668 } 00:06:31.668 ] 00:06:31.668 } 00:06:31.668 [2024-10-01 13:34:23.495805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.927 [2024-10-01 13:34:23.553335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.927 [2024-10-01 13:34:23.582180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.185  Copying: 12/36 [MB] (average 1000 MBps) 00:06:32.185 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:32.185 13:34:23 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:32.185 00:06:32.185 real 0m0.536s 00:06:32.185 user 0m0.340s 00:06:32.185 sys 0m0.245s 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:32.185 ************************************ 00:06:32.185 END TEST dd_sparse_bdev_to_file 00:06:32.185 ************************************ 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:32.185 00:06:32.185 real 0m2.107s 00:06:32.185 user 0m1.279s 00:06:32.185 sys 0m0.982s 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.185 13:34:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:32.185 ************************************ 00:06:32.185 END TEST spdk_dd_sparse 00:06:32.185 ************************************ 00:06:32.185 13:34:23 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:32.185 13:34:23 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.185 13:34:23 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.185 13:34:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:32.185 ************************************ 00:06:32.185 START TEST spdk_dd_negative 00:06:32.185 ************************************ 00:06:32.185 13:34:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:32.185 * Looking for test storage... 
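Two conventions recur in the sparse runs above: spdk_dd receives its bdev layout as JSON on an anonymous descriptor (--json /dev/fd/62, built by gen_conf), and every copy is verified by comparing stat's apparent size (%s) and allocated block count (%b) between source and destination, which shows that both the data and the holes survived. Condensed from the checks above (file names and values from the log):

  [[ $(stat --printf=%s file_zero2) == $(stat --printf=%s file_zero3) ]]   # apparent size: 37748736 bytes in both
  [[ $(stat --printf=%b file_zero2) == $(stat --printf=%b file_zero3) ]]   # allocated blocks: 24576, so the holes were preserved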
00:06:32.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:32.185 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:32.185 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:32.185 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:32.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.445 --rc genhtml_branch_coverage=1 00:06:32.445 --rc genhtml_function_coverage=1 00:06:32.445 --rc genhtml_legend=1 00:06:32.445 --rc geninfo_all_blocks=1 00:06:32.445 --rc geninfo_unexecuted_blocks=1 00:06:32.445 00:06:32.445 ' 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:32.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.445 --rc genhtml_branch_coverage=1 00:06:32.445 --rc genhtml_function_coverage=1 00:06:32.445 --rc genhtml_legend=1 00:06:32.445 --rc geninfo_all_blocks=1 00:06:32.445 --rc geninfo_unexecuted_blocks=1 00:06:32.445 00:06:32.445 ' 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:32.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.445 --rc genhtml_branch_coverage=1 00:06:32.445 --rc genhtml_function_coverage=1 00:06:32.445 --rc genhtml_legend=1 00:06:32.445 --rc geninfo_all_blocks=1 00:06:32.445 --rc geninfo_unexecuted_blocks=1 00:06:32.445 00:06:32.445 ' 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:32.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.445 --rc genhtml_branch_coverage=1 00:06:32.445 --rc genhtml_function_coverage=1 00:06:32.445 --rc genhtml_legend=1 00:06:32.445 --rc geninfo_all_blocks=1 00:06:32.445 --rc geninfo_unexecuted_blocks=1 00:06:32.445 00:06:32.445 ' 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:32.445 ************************************ 00:06:32.445 START TEST 
dd_invalid_arguments 00:06:32.445 ************************************ 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.445 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:32.445 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:32.445 00:06:32.445 CPU options: 00:06:32.445 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:32.445 (like [0,1,10]) 00:06:32.445 --lcores lcore to CPU mapping list. The list is in the format: 00:06:32.445 [<,lcores[@CPUs]>...] 00:06:32.445 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:32.445 Within the group, '-' is used for range separator, 00:06:32.446 ',' is used for single number separator. 00:06:32.446 '( )' can be omitted for single element group, 00:06:32.446 '@' can be omitted if cpus and lcores have the same value 00:06:32.446 --disable-cpumask-locks Disable CPU core lock files. 00:06:32.446 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:32.446 pollers in the app support interrupt mode) 00:06:32.446 -p, --main-core main (primary) core for DPDK 00:06:32.446 00:06:32.446 Configuration options: 00:06:32.446 -c, --config, --json JSON config file 00:06:32.446 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:32.446 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:32.446 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:32.446 --rpcs-allowed comma-separated list of permitted RPCS 00:06:32.446 --json-ignore-init-errors don't exit on invalid config entry 00:06:32.446 00:06:32.446 Memory options: 00:06:32.446 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:32.446 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:32.446 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:32.446 -R, --huge-unlink unlink huge files after initialization 00:06:32.446 -n, --mem-channels number of memory channels used for DPDK 00:06:32.446 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:32.446 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:32.446 --no-huge run without using hugepages 00:06:32.446 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:06:32.446 -i, --shm-id shared memory ID (optional) 00:06:32.446 -g, --single-file-segments force creating just one hugetlbfs file 00:06:32.446 00:06:32.446 PCI options: 00:06:32.446 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:32.446 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:32.446 -u, --no-pci disable PCI access 00:06:32.446 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:32.446 00:06:32.446 Log options: 00:06:32.446 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:32.446 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:32.446 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:32.446 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:32.446 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:06:32.446 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:06:32.446 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:06:32.446 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:06:32.446 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:06:32.446 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:06:32.446 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:32.446 --silence-noticelog disable notice level logging to stderr 00:06:32.446 00:06:32.446 Trace options: 00:06:32.446 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:32.446 setting 0 to disable trace (default 32768) 00:06:32.446 Tracepoints vary in size and can use more than one trace entry. 00:06:32.446 -e, --tpoint-group [:] 00:06:32.446 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:32.446 [2024-10-01 13:34:24.212956] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:06:32.446 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:32.446 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:06:32.446 bdev_raid, all). 00:06:32.446 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:32.446 a tracepoint group. First tpoint inside a group can be enabled by 00:06:32.446 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:32.446 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:32.446 in /include/spdk_internal/trace_defs.h 00:06:32.446 00:06:32.446 Other options: 00:06:32.446 -h, --help show this usage 00:06:32.446 -v, --version print SPDK version 00:06:32.446 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:32.446 --env-context Opaque context for use of the env implementation 00:06:32.446 00:06:32.446 Application specific: 00:06:32.446 [--------- DD Options ---------] 00:06:32.446 --if Input file. Must specify either --if or --ib. 00:06:32.446 --ib Input bdev. Must specifier either --if or --ib 00:06:32.446 --of Output file. Must specify either --of or --ob. 00:06:32.446 --ob Output bdev. Must specify either --of or --ob. 00:06:32.446 --iflag Input file flags. 00:06:32.446 --oflag Output file flags. 00:06:32.446 --bs I/O unit size (default: 4096) 00:06:32.446 --qd Queue depth (default: 2) 00:06:32.446 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:32.446 --skip Skip this many I/O units at start of input. (default: 0) 00:06:32.446 --seek Skip this many I/O units at start of output. (default: 0) 00:06:32.446 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:32.446 --sparse Enable hole skipping in input target 00:06:32.446 Available iflag and oflag values: 00:06:32.446 append - append mode 00:06:32.446 direct - use direct I/O for data 00:06:32.446 directory - fail unless a directory 00:06:32.446 dsync - use synchronized I/O for data 00:06:32.446 noatime - do not update access time 00:06:32.446 noctty - do not assign controlling terminal from file 00:06:32.446 nofollow - do not follow symlinks 00:06:32.446 nonblock - use non-blocking I/O 00:06:32.446 sync - use synchronized I/O for data and metadata 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.446 00:06:32.446 real 0m0.077s 00:06:32.446 user 0m0.049s 00:06:32.446 sys 0m0.028s 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:32.446 ************************************ 00:06:32.446 END TEST dd_invalid_arguments 00:06:32.446 ************************************ 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:32.446 ************************************ 00:06:32.446 START TEST dd_double_input 00:06:32.446 ************************************ 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.446 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:32.705 [2024-10-01 13:34:24.331397] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
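Stripped of the NOT/valid_exec_arg plumbing, each negative case is a direct spdk_dd invocation that has to fail. For the double-input case just logged, a minimal reproduction (binary and dump paths from the log) looks like:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= \
    && echo 'unexpected success' >&2
  # expected: "*ERROR*: You may specify either --if or --ib, but not both." and exit status 22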
00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.705 00:06:32.705 real 0m0.073s 00:06:32.705 user 0m0.051s 00:06:32.705 sys 0m0.020s 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:32.705 ************************************ 00:06:32.705 END TEST dd_double_input 00:06:32.705 ************************************ 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:32.705 ************************************ 00:06:32.705 START TEST dd_double_output 00:06:32.705 ************************************ 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:32.705 [2024-10-01 13:34:24.453052] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:06:32.705 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.706 00:06:32.706 real 0m0.071s 00:06:32.706 user 0m0.040s 00:06:32.706 sys 0m0.027s 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.706 ************************************ 00:06:32.706 END TEST dd_double_output 00:06:32.706 ************************************ 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:32.706 ************************************ 00:06:32.706 START TEST dd_no_input 00:06:32.706 ************************************ 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.706 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:32.964 [2024-10-01 13:34:24.577990] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:06:32.964 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:06:32.964 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.964 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.964 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.964 00:06:32.964 real 0m0.075s 00:06:32.964 user 0m0.047s 00:06:32.964 sys 0m0.027s 00:06:32.964 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.964 13:34:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:32.964 ************************************ 00:06:32.964 END TEST dd_no_input 00:06:32.964 ************************************ 00:06:32.964 13:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:06:32.964 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.964 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.964 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:32.964 ************************************ 00:06:32.964 START TEST dd_no_output 00:06:32.964 ************************************ 00:06:32.964 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:06:32.964 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.964 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.965 [2024-10-01 13:34:24.704898] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:06:32.965 13:34:24 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.965 00:06:32.965 real 0m0.073s 00:06:32.965 user 0m0.045s 00:06:32.965 sys 0m0.026s 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:32.965 ************************************ 00:06:32.965 END TEST dd_no_output 00:06:32.965 ************************************ 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:32.965 ************************************ 00:06:32.965 START TEST dd_wrong_blocksize 00:06:32.965 ************************************ 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.965 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:33.224 [2024-10-01 13:34:24.831624] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:33.224 00:06:33.224 real 0m0.075s 00:06:33.224 user 0m0.048s 00:06:33.224 sys 0m0.026s 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.224 ************************************ 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 END TEST dd_wrong_blocksize 00:06:33.224 ************************************ 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 ************************************ 00:06:33.224 START TEST dd_smaller_blocksize 00:06:33.224 ************************************ 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.224 
13:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:33.224 13:34:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:33.224 [2024-10-01 13:34:24.960922] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:06:33.224 [2024-10-01 13:34:24.960999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61563 ] 00:06:33.482 [2024-10-01 13:34:25.098050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.482 [2024-10-01 13:34:25.167271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.482 [2024-10-01 13:34:25.200238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.740 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:34.028 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:34.028 [2024-10-01 13:34:25.695000] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:34.028 [2024-10-01 13:34:25.695149] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.028 [2024-10-01 13:34:25.762239] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:34.028 13:34:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:06:34.028 13:34:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.028 13:34:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:06:34.028 13:34:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:06:34.028 13:34:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:06:34.028 13:34:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.028 00:06:34.028 real 0m0.948s 00:06:34.028 user 0m0.363s 00:06:34.028 sys 0m0.476s 00:06:34.028 13:34:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.028 ************************************ 00:06:34.028 13:34:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:34.028 END TEST dd_smaller_blocksize 00:06:34.028 ************************************ 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:34.288 ************************************ 00:06:34.288 START TEST dd_invalid_count 00:06:34.288 ************************************ 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
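Unlike the pure argument-parsing failures, dd_smaller_blocksize above gets past option validation: SPDK and DPDK actually initialize, and the run only fails once the 99999999999999-byte copy buffer cannot be allocated. The essence of that case (paths from the log):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
      --bs=99999999999999
  # expected: "Cannot allocate memory - try smaller block size value", then
  # "Error occurred while performing copy" and a non-zero exit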
00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:34.288 [2024-10-01 13:34:25.966647] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.288 00:06:34.288 real 0m0.077s 00:06:34.288 user 0m0.049s 00:06:34.288 sys 0m0.027s 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.288 13:34:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:06:34.288 ************************************ 00:06:34.288 END TEST dd_invalid_count 00:06:34.288 ************************************ 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:34.288 ************************************ 
00:06:34.288 START TEST dd_invalid_oflag 00:06:34.288 ************************************ 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:34.288 [2024-10-01 13:34:26.088089] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.288 ************************************ 00:06:34.288 END TEST dd_invalid_oflag 00:06:34.288 ************************************ 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.288 00:06:34.288 real 0m0.064s 00:06:34.288 user 0m0.038s 00:06:34.288 sys 0m0.025s 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.288 13:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:34.549 ************************************ 00:06:34.549 START TEST dd_invalid_iflag 00:06:34.549 
************************************ 00:06:34.549 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:06:34.549 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:34.549 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:06:34.549 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:34.549 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.549 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.549 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.549 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.549 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.549 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:34.550 [2024-10-01 13:34:26.203811] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.550 00:06:34.550 real 0m0.065s 00:06:34.550 user 0m0.042s 00:06:34.550 sys 0m0.023s 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:06:34.550 ************************************ 00:06:34.550 END TEST dd_invalid_iflag 00:06:34.550 ************************************ 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:34.550 ************************************ 00:06:34.550 START TEST dd_unknown_flag 00:06:34.550 ************************************ 00:06:34.550 
13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.550 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:34.550 [2024-10-01 13:34:26.338315] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:34.550 [2024-10-01 13:34:26.338713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61655 ] 00:06:34.810 [2024-10-01 13:34:26.478088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.810 [2024-10-01 13:34:26.536680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.810 [2024-10-01 13:34:26.569038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.810 [2024-10-01 13:34:26.589328] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:34.810 [2024-10-01 13:34:26.589379] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.810 [2024-10-01 13:34:26.589476] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:34.810 [2024-10-01 13:34:26.589502] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.810 [2024-10-01 13:34:26.589762] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:06:34.810 [2024-10-01 13:34:26.589778] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.810 [2024-10-01 13:34:26.589841] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:34.810 [2024-10-01 13:34:26.589872] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:34.810 [2024-10-01 13:34:26.655186] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.070 00:06:35.070 real 0m0.474s 00:06:35.070 user 0m0.256s 00:06:35.070 sys 0m0.121s 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:06:35.070 ************************************ 00:06:35.070 END TEST dd_unknown_flag 00:06:35.070 ************************************ 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:35.070 ************************************ 00:06:35.070 START TEST dd_invalid_json 00:06:35.070 ************************************ 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:35.070 13:34:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:35.070 [2024-10-01 13:34:26.870768] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:35.070 [2024-10-01 13:34:26.870887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61689 ] 00:06:35.330 [2024-10-01 13:34:27.014281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.330 [2024-10-01 13:34:27.068143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.330 [2024-10-01 13:34:27.068221] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:06:35.330 [2024-10-01 13:34:27.068233] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:35.330 [2024-10-01 13:34:27.068240] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.330 [2024-10-01 13:34:27.068273] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:35.330 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:06:35.330 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.330 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:06:35.330 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:06:35.330 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:06:35.330 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.330 00:06:35.330 real 0m0.338s 00:06:35.330 user 0m0.171s 00:06:35.330 sys 0m0.065s 00:06:35.330 ************************************ 00:06:35.330 END TEST dd_invalid_json 00:06:35.330 ************************************ 00:06:35.330 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.330 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:06:35.330 13:34:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:06:35.330 13:34:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.330 13:34:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.330 13:34:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:35.590 ************************************ 00:06:35.590 START TEST dd_invalid_seek 00:06:35.590 ************************************ 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:35.590 
13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:35.590 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.591 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.591 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.591 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.591 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.591 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.591 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:35.591 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:35.591 [2024-10-01 13:34:27.252524] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:35.591 [2024-10-01 13:34:27.252605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61713 ] 00:06:35.591 { 00:06:35.591 "subsystems": [ 00:06:35.591 { 00:06:35.591 "subsystem": "bdev", 00:06:35.591 "config": [ 00:06:35.591 { 00:06:35.591 "params": { 00:06:35.591 "block_size": 512, 00:06:35.591 "num_blocks": 512, 00:06:35.591 "name": "malloc0" 00:06:35.591 }, 00:06:35.591 "method": "bdev_malloc_create" 00:06:35.591 }, 00:06:35.591 { 00:06:35.591 "params": { 00:06:35.591 "block_size": 512, 00:06:35.591 "num_blocks": 512, 00:06:35.591 "name": "malloc1" 00:06:35.591 }, 00:06:35.591 "method": "bdev_malloc_create" 00:06:35.591 }, 00:06:35.591 { 00:06:35.591 "method": "bdev_wait_for_examine" 00:06:35.591 } 00:06:35.591 ] 00:06:35.591 } 00:06:35.591 ] 00:06:35.591 } 00:06:35.591 [2024-10-01 13:34:27.385136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.591 [2024-10-01 13:34:27.440157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.850 [2024-10-01 13:34:27.470051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.850 [2024-10-01 13:34:27.514001] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:06:35.850 [2024-10-01 13:34:27.514061] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.850 [2024-10-01 13:34:27.576649] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:35.850 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:06:35.850 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.850 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:06:35.850 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:06:35.850 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:06:35.850 ************************************ 00:06:35.850 END TEST dd_invalid_seek 00:06:35.850 ************************************ 00:06:35.850 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.850 00:06:35.850 real 0m0.465s 00:06:35.850 user 0m0.307s 00:06:35.850 sys 0m0.115s 00:06:35.850 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.850 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:35.850 13:34:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:06:35.850 13:34:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.850 13:34:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.850 13:34:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:36.108 ************************************ 00:06:36.108 START TEST dd_invalid_skip 00:06:36.108 ************************************ 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.108 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.109 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.109 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:36.109 13:34:27 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:36.109 { 00:06:36.109 "subsystems": [ 00:06:36.109 { 00:06:36.109 "subsystem": "bdev", 00:06:36.109 "config": [ 00:06:36.109 { 00:06:36.109 "params": { 00:06:36.109 "block_size": 512, 00:06:36.109 "num_blocks": 512, 00:06:36.109 "name": "malloc0" 00:06:36.109 }, 00:06:36.109 "method": "bdev_malloc_create" 00:06:36.109 }, 00:06:36.109 { 00:06:36.109 "params": { 00:06:36.109 "block_size": 512, 00:06:36.109 "num_blocks": 512, 00:06:36.109 "name": "malloc1" 
00:06:36.109 }, 00:06:36.109 "method": "bdev_malloc_create" 00:06:36.109 }, 00:06:36.109 { 00:06:36.109 "method": "bdev_wait_for_examine" 00:06:36.109 } 00:06:36.109 ] 00:06:36.109 } 00:06:36.109 ] 00:06:36.109 } 00:06:36.109 [2024-10-01 13:34:27.777615] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:06:36.109 [2024-10-01 13:34:27.777709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61752 ] 00:06:36.109 [2024-10-01 13:34:27.916644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.368 [2024-10-01 13:34:27.970380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.368 [2024-10-01 13:34:28.002600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.368 [2024-10-01 13:34:28.047427] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:06:36.368 [2024-10-01 13:34:28.047499] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.368 [2024-10-01 13:34:28.107927] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:36.368 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:06:36.368 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.368 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:06:36.368 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:06:36.368 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:06:36.368 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.368 00:06:36.368 real 0m0.476s 00:06:36.368 user 0m0.321s 00:06:36.368 sys 0m0.115s 00:06:36.368 ************************************ 00:06:36.368 END TEST dd_invalid_skip 00:06:36.368 ************************************ 00:06:36.368 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.368 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:36.628 ************************************ 00:06:36.628 START TEST dd_invalid_input_count 00:06:36.628 ************************************ 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:36.628 13:34:28 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:36.628 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:36.628 [2024-10-01 13:34:28.296890] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:36.628 [2024-10-01 13:34:28.297006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61780 ] 00:06:36.628 { 00:06:36.628 "subsystems": [ 00:06:36.628 { 00:06:36.628 "subsystem": "bdev", 00:06:36.628 "config": [ 00:06:36.628 { 00:06:36.628 "params": { 00:06:36.628 "block_size": 512, 00:06:36.628 "num_blocks": 512, 00:06:36.628 "name": "malloc0" 00:06:36.628 }, 00:06:36.628 "method": "bdev_malloc_create" 00:06:36.628 }, 00:06:36.628 { 00:06:36.628 "params": { 00:06:36.628 "block_size": 512, 00:06:36.628 "num_blocks": 512, 00:06:36.628 "name": "malloc1" 00:06:36.628 }, 00:06:36.628 "method": "bdev_malloc_create" 00:06:36.628 }, 00:06:36.628 { 00:06:36.628 "method": "bdev_wait_for_examine" 00:06:36.628 } 00:06:36.628 ] 00:06:36.628 } 00:06:36.628 ] 00:06:36.628 } 00:06:36.628 [2024-10-01 13:34:28.426631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.628 [2024-10-01 13:34:28.480920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.887 [2024-10-01 13:34:28.510392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.887 [2024-10-01 13:34:28.554166] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:06:36.887 [2024-10-01 13:34:28.554254] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.887 [2024-10-01 13:34:28.617203] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:36.887 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:06:36.887 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.887 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:06:36.887 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:06:36.887 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:06:36.887 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.887 ************************************ 00:06:36.887 END TEST dd_invalid_input_count 00:06:36.887 ************************************ 00:06:36.887 00:06:36.887 real 0m0.464s 00:06:36.887 user 0m0.309s 00:06:36.887 sys 0m0.116s 00:06:36.887 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.887 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:36.887 13:34:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:06:36.887 13:34:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.888 13:34:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.888 13:34:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:37.147 ************************************ 00:06:37.147 START TEST dd_invalid_output_count 00:06:37.147 ************************************ 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # 
invalid_output_count 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:37.147 13:34:28 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:37.147 { 00:06:37.147 "subsystems": [ 00:06:37.147 { 00:06:37.147 "subsystem": "bdev", 00:06:37.147 "config": [ 00:06:37.147 { 00:06:37.147 "params": { 00:06:37.147 "block_size": 512, 00:06:37.147 "num_blocks": 512, 00:06:37.147 "name": "malloc0" 00:06:37.147 }, 00:06:37.147 "method": "bdev_malloc_create" 00:06:37.147 }, 00:06:37.147 { 00:06:37.147 "method": "bdev_wait_for_examine" 00:06:37.147 } 00:06:37.147 ] 00:06:37.148 } 00:06:37.148 ] 00:06:37.148 } 00:06:37.148 [2024-10-01 13:34:28.815777] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 
initialization... 00:06:37.148 [2024-10-01 13:34:28.816011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61819 ] 00:06:37.148 [2024-10-01 13:34:28.951127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.148 [2024-10-01 13:34:29.003208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.408 [2024-10-01 13:34:29.035028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.408 [2024-10-01 13:34:29.072320] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:06:37.408 [2024-10-01 13:34:29.072392] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.408 [2024-10-01 13:34:29.132146] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.408 00:06:37.408 real 0m0.454s 00:06:37.408 user 0m0.287s 00:06:37.408 sys 0m0.117s 00:06:37.408 ************************************ 00:06:37.408 END TEST dd_invalid_output_count 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:37.408 ************************************ 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:37.408 ************************************ 00:06:37.408 START TEST dd_bs_not_multiple 00:06:37.408 ************************************ 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:37.408 13:34:29 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:37.408 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:37.669 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:37.669 [2024-10-01 13:34:29.322887] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:37.669 [2024-10-01 13:34:29.322976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61845 ] 00:06:37.669 { 00:06:37.669 "subsystems": [ 00:06:37.669 { 00:06:37.669 "subsystem": "bdev", 00:06:37.669 "config": [ 00:06:37.669 { 00:06:37.669 "params": { 00:06:37.669 "block_size": 512, 00:06:37.669 "num_blocks": 512, 00:06:37.669 "name": "malloc0" 00:06:37.669 }, 00:06:37.669 "method": "bdev_malloc_create" 00:06:37.669 }, 00:06:37.669 { 00:06:37.669 "params": { 00:06:37.669 "block_size": 512, 00:06:37.669 "num_blocks": 512, 00:06:37.669 "name": "malloc1" 00:06:37.669 }, 00:06:37.669 "method": "bdev_malloc_create" 00:06:37.669 }, 00:06:37.669 { 00:06:37.669 "method": "bdev_wait_for_examine" 00:06:37.669 } 00:06:37.669 ] 00:06:37.669 } 00:06:37.669 ] 00:06:37.669 } 00:06:37.669 [2024-10-01 13:34:29.460696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.669 [2024-10-01 13:34:29.512979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.928 [2024-10-01 13:34:29.542428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.928 [2024-10-01 13:34:29.586266] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:06:37.928 [2024-10-01 13:34:29.586338] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.928 [2024-10-01 13:34:29.649425] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:37.928 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:06:37.928 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.928 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:06:37.928 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:06:37.928 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:06:37.928 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.928 00:06:37.928 real 0m0.467s 00:06:37.928 user 0m0.308s 00:06:37.928 sys 0m0.123s 00:06:37.928 ************************************ 00:06:37.928 END TEST dd_bs_not_multiple 00:06:37.928 ************************************ 00:06:37.928 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.928 13:34:29 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:37.928 ************************************ 00:06:37.928 END TEST spdk_dd_negative 00:06:37.928 ************************************ 00:06:37.928 00:06:37.928 real 0m5.827s 00:06:37.928 user 0m3.138s 00:06:37.928 sys 0m2.095s 00:06:37.928 13:34:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.928 13:34:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:38.188 ************************************ 00:06:38.188 END TEST spdk_dd 00:06:38.188 ************************************ 00:06:38.188 00:06:38.188 real 1m9.034s 00:06:38.188 user 0m44.934s 00:06:38.188 sys 0m28.312s 00:06:38.188 13:34:29 spdk_dd -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:38.188 13:34:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:38.188 13:34:29 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:38.188 13:34:29 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:38.188 13:34:29 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:38.188 13:34:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:38.188 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:06:38.188 13:34:29 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:38.188 13:34:29 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:38.188 13:34:29 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:38.188 13:34:29 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:38.188 13:34:29 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:38.188 13:34:29 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:38.188 13:34:29 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:38.188 13:34:29 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:38.188 13:34:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.188 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:06:38.188 ************************************ 00:06:38.188 START TEST nvmf_tcp 00:06:38.188 ************************************ 00:06:38.188 13:34:29 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:38.188 * Looking for test storage... 00:06:38.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:38.188 13:34:29 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.188 13:34:29 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.188 13:34:29 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.447 13:34:30 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.447 13:34:30 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:38.447 13:34:30 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.447 13:34:30 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.447 --rc genhtml_branch_coverage=1 00:06:38.447 --rc genhtml_function_coverage=1 00:06:38.447 --rc genhtml_legend=1 00:06:38.447 --rc geninfo_all_blocks=1 00:06:38.447 --rc geninfo_unexecuted_blocks=1 00:06:38.447 00:06:38.447 ' 00:06:38.447 13:34:30 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.447 --rc genhtml_branch_coverage=1 00:06:38.447 --rc genhtml_function_coverage=1 00:06:38.447 --rc genhtml_legend=1 00:06:38.447 --rc geninfo_all_blocks=1 00:06:38.447 --rc geninfo_unexecuted_blocks=1 00:06:38.447 00:06:38.447 ' 00:06:38.447 13:34:30 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.447 --rc genhtml_branch_coverage=1 00:06:38.447 --rc genhtml_function_coverage=1 00:06:38.447 --rc genhtml_legend=1 00:06:38.447 --rc geninfo_all_blocks=1 00:06:38.447 --rc geninfo_unexecuted_blocks=1 00:06:38.447 00:06:38.447 ' 00:06:38.447 13:34:30 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.447 --rc genhtml_branch_coverage=1 00:06:38.447 --rc genhtml_function_coverage=1 00:06:38.447 --rc genhtml_legend=1 00:06:38.447 --rc geninfo_all_blocks=1 00:06:38.447 --rc geninfo_unexecuted_blocks=1 00:06:38.447 00:06:38.448 ' 00:06:38.448 13:34:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:38.448 13:34:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:38.448 13:34:30 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:38.448 13:34:30 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:38.448 13:34:30 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.448 13:34:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.448 ************************************ 00:06:38.448 START TEST nvmf_target_core 00:06:38.448 ************************************ 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:38.448 * Looking for test storage... 00:06:38.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.448 --rc genhtml_branch_coverage=1 00:06:38.448 --rc genhtml_function_coverage=1 00:06:38.448 --rc genhtml_legend=1 00:06:38.448 --rc geninfo_all_blocks=1 00:06:38.448 --rc geninfo_unexecuted_blocks=1 00:06:38.448 00:06:38.448 ' 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.448 --rc genhtml_branch_coverage=1 00:06:38.448 --rc genhtml_function_coverage=1 00:06:38.448 --rc genhtml_legend=1 00:06:38.448 --rc geninfo_all_blocks=1 00:06:38.448 --rc geninfo_unexecuted_blocks=1 00:06:38.448 00:06:38.448 ' 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.448 --rc genhtml_branch_coverage=1 00:06:38.448 --rc genhtml_function_coverage=1 00:06:38.448 --rc genhtml_legend=1 00:06:38.448 --rc geninfo_all_blocks=1 00:06:38.448 --rc geninfo_unexecuted_blocks=1 00:06:38.448 00:06:38.448 ' 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.448 --rc genhtml_branch_coverage=1 00:06:38.448 --rc genhtml_function_coverage=1 00:06:38.448 --rc genhtml_legend=1 00:06:38.448 --rc geninfo_all_blocks=1 00:06:38.448 --rc geninfo_unexecuted_blocks=1 00:06:38.448 00:06:38.448 ' 00:06:38.448 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
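The trace above shows nvmf/common.sh populating the shared test environment before nvmf_target_core.sh dispatches its sub-tests: the TCP listener ports, the serial number, NET_TYPE=virt, and a host identity generated by nvme-cli. A minimal, self-contained sketch of that setup follows; variable names and values are copied from the trace, while the uuidgen fallback and the parameter-expansion used to derive the host ID are assumptions for illustration only.

#!/usr/bin/env bash
# Sketch of the environment nvmf/common.sh establishes for the TCP/virt run.
set -euo pipefail

NVMF_PORT=4420               # primary NVMe/TCP listener port
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NET_TYPE=virt                # veth/netns topology instead of physical NICs

# Host identity: nvme-cli generates the hostnqn; the trailing uuid doubles as the hostid.
# Falling back to uuidgen when nvme-cli is missing is an assumption, not in the trace.
NVME_HOSTNQN=$(nvme gen-hostnqn 2>/dev/null || echo "nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)")
NVME_HOSTID=${NVME_HOSTNQN##*:}
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn

printf '%s\n' "hostnqn=$NVME_HOSTNQN" "hostid=$NVME_HOSTID" "port=$NVMF_PORT"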
00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.707 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:38.707 13:34:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.708 ************************************ 00:06:38.708 START TEST nvmf_host_management 00:06:38.708 ************************************ 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:38.708 * Looking for test storage... 
00:06:38.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.708 --rc genhtml_branch_coverage=1 00:06:38.708 --rc genhtml_function_coverage=1 00:06:38.708 --rc genhtml_legend=1 00:06:38.708 --rc geninfo_all_blocks=1 00:06:38.708 --rc geninfo_unexecuted_blocks=1 00:06:38.708 00:06:38.708 ' 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.708 --rc genhtml_branch_coverage=1 00:06:38.708 --rc genhtml_function_coverage=1 00:06:38.708 --rc genhtml_legend=1 00:06:38.708 --rc geninfo_all_blocks=1 00:06:38.708 --rc geninfo_unexecuted_blocks=1 00:06:38.708 00:06:38.708 ' 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.708 --rc genhtml_branch_coverage=1 00:06:38.708 --rc genhtml_function_coverage=1 00:06:38.708 --rc genhtml_legend=1 00:06:38.708 --rc geninfo_all_blocks=1 00:06:38.708 --rc geninfo_unexecuted_blocks=1 00:06:38.708 00:06:38.708 ' 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.708 --rc genhtml_branch_coverage=1 00:06:38.708 --rc genhtml_function_coverage=1 00:06:38.708 --rc genhtml_legend=1 00:06:38.708 --rc geninfo_all_blocks=1 00:06:38.708 --rc geninfo_unexecuted_blocks=1 00:06:38.708 00:06:38.708 ' 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
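The repeated scripts/common.sh trace above is the lcov version gate: lt 1.15 2 splits both version strings on ., - and :, compares them field by field, and because lcov 1.x is older than 2 the harness keeps the --rc lcov_branch_coverage/lcov_function_coverage option spelling. A minimal sketch of that comparison under the same logic is below; helper names mirror the trace, but this is an illustration, not the script itself.

#!/usr/bin/env bash
# Field-wise version comparison as traced from scripts/common.sh: lt 1.15 2 -> true.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        if ((d1 > d2)); then
            [[ $op == '>' || $op == '>=' ]]; return
        elif ((d1 < d2)); then
            [[ $op == '<' || $op == '<=' ]]; return
        fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }

# Same decision the trace makes: lcov older than 2 keeps the --rc option spelling.
if lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
echo "${lcov_rc_opt:-}"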
00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.708 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.971 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:38.971 13:34:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:38.971 Cannot find device "nvmf_init_br" 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:38.971 Cannot find device "nvmf_init_br2" 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:38.971 Cannot find device "nvmf_tgt_br" 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:38.971 Cannot find device "nvmf_tgt_br2" 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:38.971 Cannot find device "nvmf_init_br" 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:38.971 Cannot find device "nvmf_init_br2" 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:38.971 Cannot find device "nvmf_tgt_br" 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:38.971 Cannot find device "nvmf_tgt_br2" 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:38.971 Cannot find device "nvmf_br" 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:38.971 Cannot find device "nvmf_init_if" 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:38.971 Cannot find device "nvmf_init_if2" 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:38.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:38.971 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:38.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:38.972 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:38.972 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:38.972 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:38.972 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:38.972 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:38.972 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:38.972 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:38.972 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:38.972 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:38.972 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:38.972 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:38.972 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:39.283 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:39.283 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:39.283 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:06:39.283 00:06:39.283 --- 10.0.0.3 ping statistics --- 00:06:39.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.283 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:39.283 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:39.283 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:06:39.283 00:06:39.283 --- 10.0.0.4 ping statistics --- 00:06:39.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.283 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:39.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:39.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:06:39.283 00:06:39.283 --- 10.0.0.1 ping statistics --- 00:06:39.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.283 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:39.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:39.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:06:39.283 00:06:39.283 --- 10.0.0.2 ping statistics --- 00:06:39.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.283 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=62184 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 62184 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62184 ']' 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.283 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.542 [2024-10-01 13:34:31.157583] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
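At this point nvmf_veth_init has built the NET_TYPE=virt topology and verified it with the four pings above: two veth pairs face the initiator (10.0.0.1 and 10.0.0.2), two are moved into the nvmf_tgt_ns_spdk namespace for the target (10.0.0.3 and 10.0.0.4), and the *_br peer ends are enslaved to a single nvmf_br bridge, with iptables ACCEPT rules for TCP port 4420. A condensed sketch of that setup, using only commands already visible in the trace (run as root; teardown and error handling omitted):

#!/usr/bin/env bash
# Condensed replay of nvmf_veth_init from the trace above (requires root).
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: *_if is the usable end, *_br is the end that joins the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live inside the namespace where nvmf_tgt runs.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge ties the four *_br ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic on port 4420 and bridge-internal forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3                                    # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator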
00:06:39.542 [2024-10-01 13:34:31.157865] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.542 [2024-10-01 13:34:31.300495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.542 [2024-10-01 13:34:31.375779] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.542 [2024-10-01 13:34:31.376089] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.542 [2024-10-01 13:34:31.376283] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.542 [2024-10-01 13:34:31.376429] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.542 [2024-10-01 13:34:31.376482] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:39.542 [2024-10-01 13:34:31.376769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.542 [2024-10-01 13:34:31.376834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.542 [2024-10-01 13:34:31.376944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:39.542 [2024-10-01 13:34:31.376952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.801 [2024-10-01 13:34:31.412563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.801 [2024-10-01 13:34:31.528391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
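With the TCP transport created, the test next writes a small batch of RPCs to rpcs.txt and replays them through rpc_cmd; the file's contents are not echoed in the trace, so the sketch below is hypothetical. It is reconstructed from what the trace does confirm: a 64 MiB / 512-byte Malloc0 bdev (MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE above), a listener on 10.0.0.3:4420, and the cnode0/host0 NQNs used by the later add_host and remove_host calls. Exact arguments and ordering are illustrative only.

#!/usr/bin/env bash
# Hypothetical replay of the RPCs behind "cat rpcs.txt | rpc_cmd" in the trace.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC bdev_malloc_create 64 512 -b Malloc0                      # size in MiB, block size in bytes
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0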
00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.801 Malloc0 00:06:39.801 [2024-10-01 13:34:31.589975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62231 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62231 /var/tmp/bdevperf.sock 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62231 ']' 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:06:39.801 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:06:39.802 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.802 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:06:39.802 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.802 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:06:39.802 { 00:06:39.802 "params": { 00:06:39.802 "name": "Nvme$subsystem", 00:06:39.802 "trtype": "$TEST_TRANSPORT", 00:06:39.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:39.802 "adrfam": "ipv4", 00:06:39.802 "trsvcid": "$NVMF_PORT", 00:06:39.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:39.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:39.802 "hdgst": ${hdgst:-false}, 00:06:39.802 "ddgst": ${ddgst:-false} 00:06:39.802 }, 00:06:39.802 "method": "bdev_nvme_attach_controller" 00:06:39.802 } 00:06:39.802 EOF 00:06:39.802 )") 00:06:39.802 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:06:39.802 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:06:39.802 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:06:39.802 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:06:39.802 "params": { 00:06:39.802 "name": "Nvme0", 00:06:39.802 "trtype": "tcp", 00:06:39.802 "traddr": "10.0.0.3", 00:06:39.802 "adrfam": "ipv4", 00:06:39.802 "trsvcid": "4420", 00:06:39.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:39.802 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:39.802 "hdgst": false, 00:06:39.802 "ddgst": false 00:06:39.802 }, 00:06:39.802 "method": "bdev_nvme_attach_controller" 00:06:39.802 }' 00:06:40.060 [2024-10-01 13:34:31.693984] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:06:40.060 [2024-10-01 13:34:31.694651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62231 ] 00:06:40.060 [2024-10-01 13:34:31.834634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.060 [2024-10-01 13:34:31.903264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.319 [2024-10-01 13:34:31.944095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.319 Running I/O for 10 seconds... 
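bdevperf has now attached to the target as an NVMe-oF initiator using the generated JSON above (Nvme0 over tcp to 10.0.0.3:4420, cnode0/host0) and started its 10-second verify workload. The lines that follow are the waitforio helper polling bdevperf's RPC socket until the Nvme0n1 bdev reports at least 100 completed reads; a minimal sketch of that loop, built from the rpc_cmd and jq invocations visible in the trace, is:

#!/usr/bin/env bash
# Sketch of the waitforio polling traced in the next lines: wait until the
# Nvme0n1 bdev exposed by bdevperf has completed at least 100 read ops.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

ret=1
for ((i = 10; i > 0; i--)); do
    reads=$("$RPC" -s "$SOCK" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
done
exit $ret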
00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.319 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.577 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:40.578 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:40.578 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.838 13:34:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.838 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.838 [2024-10-01 13:34:32.512968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.838 [2024-10-01 13:34:32.513027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.838 [2024-10-01 13:34:32.513056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.838 [2024-10-01 13:34:32.513069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 
[2024-10-01 13:34:32.513144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 
13:34:32.513365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 
13:34:32.513621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 
13:34:32.513838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.839 [2024-10-01 13:34:32.513977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.839 [2024-10-01 13:34:32.513986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.513997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 
13:34:32.514056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 
13:34:32.514291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.840 [2024-10-01 13:34:32.514464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba56b0 is same with the state(6) to be set 00:06:40.840 [2024-10-01 13:34:32.514522] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xba56b0 was disconnected and freed. reset controller. 
00:06:40.840 [2024-10-01 13:34:32.514632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.840 [2024-10-01 13:34:32.514650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.840 [2024-10-01 13:34:32.514671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.840 [2024-10-01 13:34:32.514690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.840 [2024-10-01 13:34:32.514709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.840 [2024-10-01 13:34:32.514720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5b20 is same with the state(6) to be set 00:06:40.840 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.840 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:40.840 [2024-10-01 13:34:32.515862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:40.840 task offset: 81920 on job bdev=Nvme0n1 fails 00:06:40.840 00:06:40.840 Latency(us) 00:06:40.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:40.840 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:40.840 Job: Nvme0n1 ended in about 0.46 seconds with error 00:06:40.840 Verification LBA range: start 0x0 length 0x400 00:06:40.840 Nvme0n1 : 0.46 1391.21 86.95 139.12 0.00 40211.39 2219.29 45756.04 00:06:40.840 =================================================================================================================== 00:06:40.840 Total : 1391.21 86.95 139.12 0.00 40211.39 2219.29 45756.04 00:06:40.840 [2024-10-01 13:34:32.517833] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.840 [2024-10-01 13:34:32.517856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba5b20 (9): Bad file descriptor 00:06:40.840 [2024-10-01 13:34:32.522387] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:41.775 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62231 00:06:41.775 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62231) - No such process 00:06:41.775 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:41.776 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:41.776 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:41.776 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:41.776 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:06:41.776 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:06:41.776 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:06:41.776 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:06:41.776 { 00:06:41.776 "params": { 00:06:41.776 "name": "Nvme$subsystem", 00:06:41.776 "trtype": "$TEST_TRANSPORT", 00:06:41.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:41.776 "adrfam": "ipv4", 00:06:41.776 "trsvcid": "$NVMF_PORT", 00:06:41.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:41.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:41.776 "hdgst": ${hdgst:-false}, 00:06:41.776 "ddgst": ${ddgst:-false} 00:06:41.776 }, 00:06:41.776 "method": "bdev_nvme_attach_controller" 00:06:41.776 } 00:06:41.776 EOF 00:06:41.776 )") 00:06:41.776 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:06:41.776 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:06:41.776 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:06:41.776 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:06:41.776 "params": { 00:06:41.776 "name": "Nvme0", 00:06:41.776 "trtype": "tcp", 00:06:41.776 "traddr": "10.0.0.3", 00:06:41.776 "adrfam": "ipv4", 00:06:41.776 "trsvcid": "4420", 00:06:41.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:41.776 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:41.776 "hdgst": false, 00:06:41.776 "ddgst": false 00:06:41.776 }, 00:06:41.776 "method": "bdev_nvme_attach_controller" 00:06:41.776 }' 00:06:41.776 [2024-10-01 13:34:33.581860] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:41.776 [2024-10-01 13:34:33.581963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62271 ] 00:06:42.034 [2024-10-01 13:34:33.722386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.034 [2024-10-01 13:34:33.781353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.034 [2024-10-01 13:34:33.819049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.293 Running I/O for 1 seconds... 00:06:43.228 1472.00 IOPS, 92.00 MiB/s 00:06:43.228 Latency(us) 00:06:43.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:43.228 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:43.228 Verification LBA range: start 0x0 length 0x400 00:06:43.228 Nvme0n1 : 1.03 1492.10 93.26 0.00 0.00 42041.11 3961.95 37891.72 00:06:43.228 =================================================================================================================== 00:06:43.228 Total : 1492.10 93.26 0.00 0.00 42041.11 3961.95 37891.72 00:06:43.486 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:43.486 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:43.486 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:06:43.486 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:43.486 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:43.486 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:43.486 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:43.486 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:43.486 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:43.486 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:43.486 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:43.486 rmmod nvme_tcp 00:06:43.487 rmmod nvme_fabrics 00:06:43.487 rmmod nvme_keyring 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 62184 ']' 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 62184 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 62184 ']' 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 62184 00:06:43.487 
13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62184 00:06:43.487 killing process with pid 62184 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62184' 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 62184 00:06:43.487 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 62184 00:06:43.745 [2024-10-01 13:34:35.427464] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:43.745 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:44.003 13:34:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:44.003 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:44.003 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:44.003 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:44.003 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.003 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.003 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.003 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:06:44.003 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:44.003 00:06:44.003 real 0m5.364s 00:06:44.003 user 0m18.618s 00:06:44.003 sys 0m1.384s 00:06:44.003 ************************************ 00:06:44.003 END TEST nvmf_host_management 00:06:44.003 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.003 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:44.003 ************************************ 00:06:44.003 13:34:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:44.003 13:34:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:44.004 13:34:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.004 13:34:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.004 ************************************ 00:06:44.004 START TEST nvmf_lvol 00:06:44.004 ************************************ 00:06:44.004 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:44.004 * Looking for test storage... 
00:06:44.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:44.004 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:44.004 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:06:44.004 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:44.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.263 --rc genhtml_branch_coverage=1 00:06:44.263 --rc genhtml_function_coverage=1 00:06:44.263 --rc genhtml_legend=1 00:06:44.263 --rc geninfo_all_blocks=1 00:06:44.263 --rc geninfo_unexecuted_blocks=1 00:06:44.263 00:06:44.263 ' 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:44.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.263 --rc genhtml_branch_coverage=1 00:06:44.263 --rc genhtml_function_coverage=1 00:06:44.263 --rc genhtml_legend=1 00:06:44.263 --rc geninfo_all_blocks=1 00:06:44.263 --rc geninfo_unexecuted_blocks=1 00:06:44.263 00:06:44.263 ' 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:44.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.263 --rc genhtml_branch_coverage=1 00:06:44.263 --rc genhtml_function_coverage=1 00:06:44.263 --rc genhtml_legend=1 00:06:44.263 --rc geninfo_all_blocks=1 00:06:44.263 --rc geninfo_unexecuted_blocks=1 00:06:44.263 00:06:44.263 ' 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:44.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.263 --rc genhtml_branch_coverage=1 00:06:44.263 --rc genhtml_function_coverage=1 00:06:44.263 --rc genhtml_legend=1 00:06:44.263 --rc geninfo_all_blocks=1 00:06:44.263 --rc geninfo_unexecuted_blocks=1 00:06:44.263 00:06:44.263 ' 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.263 13:34:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.263 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.264 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:44.264 
13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:44.264 Cannot find device "nvmf_init_br" 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:06:44.264 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:44.264 Cannot find device "nvmf_init_br2" 00:06:44.264 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:06:44.264 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:44.264 Cannot find device "nvmf_tgt_br" 00:06:44.264 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:06:44.264 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:44.264 Cannot find device "nvmf_tgt_br2" 00:06:44.264 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:06:44.264 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:44.264 Cannot find device "nvmf_init_br" 00:06:44.264 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:06:44.264 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:44.264 Cannot find device "nvmf_init_br2" 00:06:44.264 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:06:44.264 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:44.264 Cannot find device "nvmf_tgt_br" 00:06:44.264 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:44.265 Cannot find device "nvmf_tgt_br2" 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:44.265 Cannot find device "nvmf_br" 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:44.265 Cannot find device "nvmf_init_if" 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:44.265 Cannot find device "nvmf_init_if2" 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:44.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:44.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:44.265 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:44.523 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:44.523 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:06:44.523 00:06:44.523 --- 10.0.0.3 ping statistics --- 00:06:44.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.523 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:44.523 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:44.523 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:06:44.523 00:06:44.523 --- 10.0.0.4 ping statistics --- 00:06:44.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.523 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:44.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:44.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:06:44.523 00:06:44.523 --- 10.0.0.1 ping statistics --- 00:06:44.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.523 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:44.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:44.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:06:44.523 00:06:44.523 --- 10.0.0.2 ping statistics --- 00:06:44.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.523 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:44.523 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=62532 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 62532 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 62532 ']' 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.781 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.781 [2024-10-01 13:34:36.476450] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:06:44.782 [2024-10-01 13:34:36.476584] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.782 [2024-10-01 13:34:36.617650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.040 [2024-10-01 13:34:36.675547] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.040 [2024-10-01 13:34:36.675652] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.040 [2024-10-01 13:34:36.675664] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.040 [2024-10-01 13:34:36.675672] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.040 [2024-10-01 13:34:36.675680] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:45.040 [2024-10-01 13:34:36.677460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.040 [2024-10-01 13:34:36.677836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.040 [2024-10-01 13:34:36.677852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.040 [2024-10-01 13:34:36.708300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.974 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.974 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:45.974 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:45.974 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:45.974 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.974 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.974 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:46.233 [2024-10-01 13:34:37.864480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.233 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:46.491 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:46.491 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:46.750 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:46.750 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:47.008 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:47.317 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a6954989-82ad-4447-b49a-53afc254a1be 00:06:47.317 13:34:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a6954989-82ad-4447-b49a-53afc254a1be lvol 20 00:06:47.583 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=661703e1-c927-4dff-a52a-3afdfbd636a0 00:06:47.583 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:48.151 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 661703e1-c927-4dff-a52a-3afdfbd636a0 00:06:48.151 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:48.409 [2024-10-01 13:34:40.249689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:48.409 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:48.975 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62613 00:06:48.975 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:48.975 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:49.910 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 661703e1-c927-4dff-a52a-3afdfbd636a0 MY_SNAPSHOT 00:06:50.168 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=be7d5cd7-f85d-40b1-accf-fe23e244a2fb 00:06:50.168 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 661703e1-c927-4dff-a52a-3afdfbd636a0 30 00:06:50.427 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone be7d5cd7-f85d-40b1-accf-fe23e244a2fb MY_CLONE 00:06:50.685 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=16432ed9-beca-4524-b4b8-663b4778fd7e 00:06:50.685 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 16432ed9-beca-4524-b4b8-663b4778fd7e 00:06:51.249 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62613 00:06:59.356 Initializing NVMe Controllers 00:06:59.356 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:06:59.356 Controller IO queue size 128, less than required. 00:06:59.356 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:59.356 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:59.356 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:59.356 Initialization complete. Launching workers. 
00:06:59.356 ======================================================== 00:06:59.356 Latency(us) 00:06:59.356 Device Information : IOPS MiB/s Average min max 00:06:59.356 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10151.20 39.65 12620.62 2240.09 67186.94 00:06:59.356 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10210.30 39.88 12548.83 2493.22 73870.27 00:06:59.356 ======================================================== 00:06:59.356 Total : 20361.50 79.54 12584.62 2240.09 73870.27 00:06:59.356 00:06:59.356 13:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:59.356 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 661703e1-c927-4dff-a52a-3afdfbd636a0 00:06:59.615 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a6954989-82ad-4447-b49a-53afc254a1be 00:06:59.874 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:59.874 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:59.874 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:59.874 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:59.874 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:00.133 rmmod nvme_tcp 00:07:00.133 rmmod nvme_fabrics 00:07:00.133 rmmod nvme_keyring 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 62532 ']' 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 62532 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 62532 ']' 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 62532 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62532 00:07:00.133 killing process with pid 62532 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 62532' 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 62532 00:07:00.133 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 62532 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:00.392 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:00.651 00:07:00.651 real 0m16.525s 00:07:00.651 user 1m7.506s 00:07:00.651 sys 0m4.227s 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:00.651 ************************************ 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:00.651 END TEST nvmf_lvol 00:07:00.651 ************************************ 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:00.651 ************************************ 00:07:00.651 START TEST nvmf_lvs_grow 00:07:00.651 ************************************ 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:00.651 * Looking for test storage... 00:07:00.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:07:00.651 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:00.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.909 --rc genhtml_branch_coverage=1 00:07:00.909 --rc genhtml_function_coverage=1 00:07:00.909 --rc genhtml_legend=1 00:07:00.909 --rc geninfo_all_blocks=1 00:07:00.909 --rc geninfo_unexecuted_blocks=1 00:07:00.909 00:07:00.909 ' 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:00.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.909 --rc genhtml_branch_coverage=1 00:07:00.909 --rc genhtml_function_coverage=1 00:07:00.909 --rc genhtml_legend=1 00:07:00.909 --rc geninfo_all_blocks=1 00:07:00.909 --rc geninfo_unexecuted_blocks=1 00:07:00.909 00:07:00.909 ' 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:00.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.909 --rc genhtml_branch_coverage=1 00:07:00.909 --rc genhtml_function_coverage=1 00:07:00.909 --rc genhtml_legend=1 00:07:00.909 --rc geninfo_all_blocks=1 00:07:00.909 --rc geninfo_unexecuted_blocks=1 00:07:00.909 00:07:00.909 ' 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:00.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.909 --rc genhtml_branch_coverage=1 00:07:00.909 --rc genhtml_function_coverage=1 00:07:00.909 --rc genhtml_legend=1 00:07:00.909 --rc geninfo_all_blocks=1 00:07:00.909 --rc geninfo_unexecuted_blocks=1 00:07:00.909 00:07:00.909 ' 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:00.909 13:34:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.909 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.910 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
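The nvmftestinit that follows rebuilds the same virtual topology the preceding lvol test used: two initiator veth interfaces stay in the root namespace (10.0.0.1 and 10.0.0.2), two target veth interfaces are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and the peer end of every veth pair is enslaved to the nvmf_br bridge so initiator and target can reach each other. A condensed sketch of that setup, using only the interface names and addresses visible in the trace (a summary of the logged commands, not the full nvmf/common.sh logic):

    # namespace plus four veth pairs (host end / bridge end)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # target-side interfaces live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator addresses in the root namespace, target addresses in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring links up and join the bridge-side ends to nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

Once the ping checks confirm both directions, the target application is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt), which is why the listeners created later on 10.0.0.3:4420 are reachable from the root namespace through the bridge.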
00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:00.910 Cannot find device "nvmf_init_br" 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:00.910 Cannot find device "nvmf_init_br2" 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:00.910 Cannot find device "nvmf_tgt_br" 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:00.910 Cannot find device "nvmf_tgt_br2" 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:00.910 Cannot find device "nvmf_init_br" 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:00.910 Cannot find device "nvmf_init_br2" 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:00.910 Cannot find device "nvmf_tgt_br" 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:00.910 Cannot find device "nvmf_tgt_br2" 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:00.910 Cannot find device "nvmf_br" 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:00.910 Cannot find device "nvmf_init_if" 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:00.910 Cannot find device "nvmf_init_if2" 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:00.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:00.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:00.910 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
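The ipts calls that follow (and the matching calls at 13:34:36 earlier) are a thin wrapper around iptables: every rule the harness inserts carries an "-m comment --comment SPDK_NVMF:<original arguments>" tag, and the iptr cleanup seen at 13:34:52 (iptables-save | grep -v SPDK_NVMF | iptables-restore) uses that tag to strip exactly the rules the tests added and nothing else. A minimal sketch of the pattern as it appears in the trace (the real helpers live in nvmf/common.sh; treat this as an illustration, not the exact source):

    # insert a rule tagged with an SPDK_NVMF comment
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # remove every tagged rule in one pass
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    # open NVMe/TCP port 4420 on both initiator interfaces, allow bridge-local forwarding
    ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT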
00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:01.169 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:01.169 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:01.169 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:07:01.170 00:07:01.170 --- 10.0.0.3 ping statistics --- 00:07:01.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.170 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:01.170 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:01.170 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:01.170 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:07:01.170 00:07:01.170 --- 10.0.0.4 ping statistics --- 00:07:01.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.170 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:01.170 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:01.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:01.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:01.170 00:07:01.170 --- 10.0.0.1 ping statistics --- 00:07:01.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.170 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:01.170 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:01.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:01.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:07:01.170 00:07:01.170 --- 10.0.0.2 ping statistics --- 00:07:01.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.170 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:07:01.170 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.170 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:07:01.170 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:01.170 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.170 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:01.170 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:01.170 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.170 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:01.170 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:01.170 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:01.170 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:01.170 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:01.170 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:01.170 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=62996 00:07:01.170 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:01.170 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 62996 00:07:01.170 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 62996 ']' 00:07:01.170 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.170 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.170 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.170 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.429 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:01.429 [2024-10-01 13:34:53.078227] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:07:01.429 [2024-10-01 13:34:53.078296] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.429 [2024-10-01 13:34:53.212166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.429 [2024-10-01 13:34:53.266749] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.429 [2024-10-01 13:34:53.266814] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.429 [2024-10-01 13:34:53.266825] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.429 [2024-10-01 13:34:53.266834] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.429 [2024-10-01 13:34:53.266841] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.429 [2024-10-01 13:34:53.266867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.688 [2024-10-01 13:34:53.297165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.255 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.255 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:02.255 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:02.255 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.256 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.256 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.256 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:02.514 [2024-10-01 13:34:54.339727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.514 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:02.514 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.514 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.514 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.514 ************************************ 00:07:02.514 START TEST lvs_grow_clean 00:07:02.514 ************************************ 00:07:02.514 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:02.514 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:02.514 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:02.514 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:02.514 13:34:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:02.514 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:02.514 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:02.514 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:02.514 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:02.772 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:03.031 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:03.031 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:03.289 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4967c089-8a06-49b2-ba9a-247610ae2a98 00:07:03.289 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:03.289 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4967c089-8a06-49b2-ba9a-247610ae2a98 00:07:03.548 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:03.548 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:03.548 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4967c089-8a06-49b2-ba9a-247610ae2a98 lvol 150 00:07:03.807 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4bb336e1-ed9e-40f7-91c0-311543f086d8 00:07:03.807 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:03.807 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:04.065 [2024-10-01 13:34:55.775423] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:04.065 [2024-10-01 13:34:55.775501] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:04.065 true 00:07:04.065 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4967c089-8a06-49b2-ba9a-247610ae2a98 00:07:04.065 13:34:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:04.323 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:04.323 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:04.581 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4bb336e1-ed9e-40f7-91c0-311543f086d8 00:07:04.840 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:05.098 [2024-10-01 13:34:56.836392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:05.098 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:05.357 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63079 00:07:05.357 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:05.357 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:05.357 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63079 /var/tmp/bdevperf.sock 00:07:05.357 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 63079 ']' 00:07:05.357 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:05.357 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:05.357 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:05.357 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.357 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:05.357 [2024-10-01 13:34:57.175519] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:07:05.357 [2024-10-01 13:34:57.175645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63079 ] 00:07:05.616 [2024-10-01 13:34:57.314030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.616 [2024-10-01 13:34:57.385073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.616 [2024-10-01 13:34:57.418970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.553 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.553 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:06.553 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:06.811 Nvme0n1 00:07:06.811 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:07.070 [ 00:07:07.070 { 00:07:07.070 "name": "Nvme0n1", 00:07:07.070 "aliases": [ 00:07:07.070 "4bb336e1-ed9e-40f7-91c0-311543f086d8" 00:07:07.070 ], 00:07:07.070 "product_name": "NVMe disk", 00:07:07.070 "block_size": 4096, 00:07:07.070 "num_blocks": 38912, 00:07:07.070 "uuid": "4bb336e1-ed9e-40f7-91c0-311543f086d8", 00:07:07.070 "numa_id": -1, 00:07:07.070 "assigned_rate_limits": { 00:07:07.070 "rw_ios_per_sec": 0, 00:07:07.070 "rw_mbytes_per_sec": 0, 00:07:07.070 "r_mbytes_per_sec": 0, 00:07:07.070 "w_mbytes_per_sec": 0 00:07:07.070 }, 00:07:07.070 "claimed": false, 00:07:07.070 "zoned": false, 00:07:07.070 "supported_io_types": { 00:07:07.070 "read": true, 00:07:07.070 "write": true, 00:07:07.070 "unmap": true, 00:07:07.070 "flush": true, 00:07:07.070 "reset": true, 00:07:07.070 "nvme_admin": true, 00:07:07.070 "nvme_io": true, 00:07:07.070 "nvme_io_md": false, 00:07:07.070 "write_zeroes": true, 00:07:07.070 "zcopy": false, 00:07:07.070 "get_zone_info": false, 00:07:07.070 "zone_management": false, 00:07:07.070 "zone_append": false, 00:07:07.070 "compare": true, 00:07:07.070 "compare_and_write": true, 00:07:07.070 "abort": true, 00:07:07.070 "seek_hole": false, 00:07:07.070 "seek_data": false, 00:07:07.070 "copy": true, 00:07:07.070 "nvme_iov_md": false 00:07:07.070 }, 00:07:07.070 "memory_domains": [ 00:07:07.070 { 00:07:07.070 "dma_device_id": "system", 00:07:07.070 "dma_device_type": 1 00:07:07.070 } 00:07:07.070 ], 00:07:07.070 "driver_specific": { 00:07:07.070 "nvme": [ 00:07:07.070 { 00:07:07.070 "trid": { 00:07:07.070 "trtype": "TCP", 00:07:07.070 "adrfam": "IPv4", 00:07:07.070 "traddr": "10.0.0.3", 00:07:07.070 "trsvcid": "4420", 00:07:07.070 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:07.070 }, 00:07:07.070 "ctrlr_data": { 00:07:07.070 "cntlid": 1, 00:07:07.070 "vendor_id": "0x8086", 00:07:07.070 "model_number": "SPDK bdev Controller", 00:07:07.070 "serial_number": "SPDK0", 00:07:07.070 "firmware_revision": "25.01", 00:07:07.070 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:07.070 "oacs": { 00:07:07.070 "security": 0, 00:07:07.070 "format": 0, 00:07:07.070 "firmware": 0, 
00:07:07.070 "ns_manage": 0 00:07:07.070 }, 00:07:07.070 "multi_ctrlr": true, 00:07:07.070 "ana_reporting": false 00:07:07.070 }, 00:07:07.070 "vs": { 00:07:07.070 "nvme_version": "1.3" 00:07:07.070 }, 00:07:07.070 "ns_data": { 00:07:07.070 "id": 1, 00:07:07.070 "can_share": true 00:07:07.070 } 00:07:07.070 } 00:07:07.070 ], 00:07:07.070 "mp_policy": "active_passive" 00:07:07.070 } 00:07:07.070 } 00:07:07.070 ] 00:07:07.070 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:07.070 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63102 00:07:07.070 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:07.070 Running I/O for 10 seconds... 00:07:08.450 Latency(us) 00:07:08.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.450 Nvme0n1 : 1.00 6917.00 27.02 0.00 0.00 0.00 0.00 0.00 00:07:08.450 =================================================================================================================== 00:07:08.450 Total : 6917.00 27.02 0.00 0.00 0.00 0.00 0.00 00:07:08.450 00:07:09.016 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4967c089-8a06-49b2-ba9a-247610ae2a98 00:07:09.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.274 Nvme0n1 : 2.00 6760.50 26.41 0.00 0.00 0.00 0.00 0.00 00:07:09.274 =================================================================================================================== 00:07:09.274 Total : 6760.50 26.41 0.00 0.00 0.00 0.00 0.00 00:07:09.274 00:07:09.274 true 00:07:09.274 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:09.274 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4967c089-8a06-49b2-ba9a-247610ae2a98 00:07:09.840 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:09.840 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:09.840 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63102 00:07:10.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.098 Nvme0n1 : 3.00 6750.67 26.37 0.00 0.00 0.00 0.00 0.00 00:07:10.098 =================================================================================================================== 00:07:10.098 Total : 6750.67 26.37 0.00 0.00 0.00 0.00 0.00 00:07:10.098 00:07:11.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.058 Nvme0n1 : 4.00 6678.75 26.09 0.00 0.00 0.00 0.00 0.00 00:07:11.058 =================================================================================================================== 00:07:11.058 Total : 6678.75 26.09 0.00 0.00 0.00 0.00 0.00 00:07:11.058 00:07:12.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:12.432 Nvme0n1 : 5.00 6663.80 26.03 0.00 0.00 0.00 0.00 0.00 00:07:12.432 =================================================================================================================== 00:07:12.432 Total : 6663.80 26.03 0.00 0.00 0.00 0.00 0.00 00:07:12.432 00:07:13.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.367 Nvme0n1 : 6.00 6653.83 25.99 0.00 0.00 0.00 0.00 0.00 00:07:13.367 =================================================================================================================== 00:07:13.367 Total : 6653.83 25.99 0.00 0.00 0.00 0.00 0.00 00:07:13.367 00:07:14.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.302 Nvme0n1 : 7.00 6664.86 26.03 0.00 0.00 0.00 0.00 0.00 00:07:14.302 =================================================================================================================== 00:07:14.302 Total : 6664.86 26.03 0.00 0.00 0.00 0.00 0.00 00:07:14.302 00:07:15.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.237 Nvme0n1 : 8.00 6641.38 25.94 0.00 0.00 0.00 0.00 0.00 00:07:15.237 =================================================================================================================== 00:07:15.237 Total : 6641.38 25.94 0.00 0.00 0.00 0.00 0.00 00:07:15.237 00:07:16.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.182 Nvme0n1 : 9.00 6609.00 25.82 0.00 0.00 0.00 0.00 0.00 00:07:16.182 =================================================================================================================== 00:07:16.182 Total : 6609.00 25.82 0.00 0.00 0.00 0.00 0.00 00:07:16.182 00:07:17.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.142 Nvme0n1 : 10.00 6595.80 25.76 0.00 0.00 0.00 0.00 0.00 00:07:17.142 =================================================================================================================== 00:07:17.142 Total : 6595.80 25.76 0.00 0.00 0.00 0.00 0.00 00:07:17.142 00:07:17.142 00:07:17.142 Latency(us) 00:07:17.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.142 Nvme0n1 : 10.02 6597.52 25.77 0.00 0.00 19394.99 10247.45 85792.58 00:07:17.142 =================================================================================================================== 00:07:17.142 Total : 6597.52 25.77 0.00 0.00 19394.99 10247.45 85792.58 00:07:17.142 { 00:07:17.142 "results": [ 00:07:17.142 { 00:07:17.142 "job": "Nvme0n1", 00:07:17.142 "core_mask": "0x2", 00:07:17.142 "workload": "randwrite", 00:07:17.142 "status": "finished", 00:07:17.142 "queue_depth": 128, 00:07:17.142 "io_size": 4096, 00:07:17.142 "runtime": 10.016788, 00:07:17.142 "iops": 6597.524076580237, 00:07:17.142 "mibps": 25.771578424141552, 00:07:17.142 "io_failed": 0, 00:07:17.142 "io_timeout": 0, 00:07:17.142 "avg_latency_us": 19394.98985839388, 00:07:17.142 "min_latency_us": 10247.447272727273, 00:07:17.142 "max_latency_us": 85792.58181818182 00:07:17.142 } 00:07:17.142 ], 00:07:17.142 "core_count": 1 00:07:17.142 } 00:07:17.142 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63079 00:07:17.142 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 63079 ']' 00:07:17.142 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@954 -- # kill -0 63079 00:07:17.142 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:17.142 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.142 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63079 00:07:17.142 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:17.142 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:17.142 killing process with pid 63079 00:07:17.142 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63079' 00:07:17.142 Received shutdown signal, test time was about 10.000000 seconds 00:07:17.142 00:07:17.142 Latency(us) 00:07:17.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.142 =================================================================================================================== 00:07:17.142 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:17.142 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 63079 00:07:17.142 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 63079 00:07:17.403 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:17.662 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:17.921 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4967c089-8a06-49b2-ba9a-247610ae2a98 00:07:17.921 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:18.181 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:18.181 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:18.181 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:18.440 [2024-10-01 13:35:10.257216] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:18.699 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4967c089-8a06-49b2-ba9a-247610ae2a98 00:07:18.699 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:18.699 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4967c089-8a06-49b2-ba9a-247610ae2a98 00:07:18.699 13:35:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.699 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.699 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.699 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.699 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.699 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.699 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.699 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:18.699 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4967c089-8a06-49b2-ba9a-247610ae2a98 00:07:18.958 request: 00:07:18.958 { 00:07:18.958 "uuid": "4967c089-8a06-49b2-ba9a-247610ae2a98", 00:07:18.958 "method": "bdev_lvol_get_lvstores", 00:07:18.958 "req_id": 1 00:07:18.958 } 00:07:18.958 Got JSON-RPC error response 00:07:18.958 response: 00:07:18.958 { 00:07:18.958 "code": -19, 00:07:18.958 "message": "No such device" 00:07:18.958 } 00:07:18.958 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:18.958 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:18.958 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:18.958 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:18.958 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:19.218 aio_bdev 00:07:19.218 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4bb336e1-ed9e-40f7-91c0-311543f086d8 00:07:19.218 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=4bb336e1-ed9e-40f7-91c0-311543f086d8 00:07:19.218 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:19.218 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:19.218 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:19.218 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:19.218 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:19.477 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4bb336e1-ed9e-40f7-91c0-311543f086d8 -t 2000 00:07:19.736 [ 00:07:19.736 { 00:07:19.736 "name": "4bb336e1-ed9e-40f7-91c0-311543f086d8", 00:07:19.736 "aliases": [ 00:07:19.736 "lvs/lvol" 00:07:19.736 ], 00:07:19.736 "product_name": "Logical Volume", 00:07:19.736 "block_size": 4096, 00:07:19.736 "num_blocks": 38912, 00:07:19.736 "uuid": "4bb336e1-ed9e-40f7-91c0-311543f086d8", 00:07:19.736 "assigned_rate_limits": { 00:07:19.736 "rw_ios_per_sec": 0, 00:07:19.736 "rw_mbytes_per_sec": 0, 00:07:19.736 "r_mbytes_per_sec": 0, 00:07:19.736 "w_mbytes_per_sec": 0 00:07:19.736 }, 00:07:19.736 "claimed": false, 00:07:19.736 "zoned": false, 00:07:19.736 "supported_io_types": { 00:07:19.736 "read": true, 00:07:19.736 "write": true, 00:07:19.736 "unmap": true, 00:07:19.736 "flush": false, 00:07:19.736 "reset": true, 00:07:19.736 "nvme_admin": false, 00:07:19.736 "nvme_io": false, 00:07:19.736 "nvme_io_md": false, 00:07:19.736 "write_zeroes": true, 00:07:19.736 "zcopy": false, 00:07:19.736 "get_zone_info": false, 00:07:19.736 "zone_management": false, 00:07:19.736 "zone_append": false, 00:07:19.736 "compare": false, 00:07:19.736 "compare_and_write": false, 00:07:19.736 "abort": false, 00:07:19.736 "seek_hole": true, 00:07:19.736 "seek_data": true, 00:07:19.736 "copy": false, 00:07:19.736 "nvme_iov_md": false 00:07:19.736 }, 00:07:19.736 "driver_specific": { 00:07:19.736 "lvol": { 00:07:19.736 "lvol_store_uuid": "4967c089-8a06-49b2-ba9a-247610ae2a98", 00:07:19.736 "base_bdev": "aio_bdev", 00:07:19.736 "thin_provision": false, 00:07:19.736 "num_allocated_clusters": 38, 00:07:19.736 "snapshot": false, 00:07:19.736 "clone": false, 00:07:19.736 "esnap_clone": false 00:07:19.736 } 00:07:19.736 } 00:07:19.736 } 00:07:19.736 ] 00:07:19.736 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:19.736 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4967c089-8a06-49b2-ba9a-247610ae2a98 00:07:19.736 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:19.995 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:19.995 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4967c089-8a06-49b2-ba9a-247610ae2a98 00:07:19.995 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:20.253 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:20.253 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4bb336e1-ed9e-40f7-91c0-311543f086d8 00:07:20.513 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4967c089-8a06-49b2-ba9a-247610ae2a98 00:07:21.082 13:35:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:21.082 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:21.650 ************************************ 00:07:21.650 END TEST lvs_grow_clean 00:07:21.650 ************************************ 00:07:21.650 00:07:21.650 real 0m18.933s 00:07:21.650 user 0m17.969s 00:07:21.650 sys 0m2.496s 00:07:21.650 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.650 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:21.650 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:21.650 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:21.650 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.650 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.650 ************************************ 00:07:21.650 START TEST lvs_grow_dirty 00:07:21.650 ************************************ 00:07:21.650 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:21.650 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:21.650 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:21.650 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:21.650 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:21.651 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:21.651 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:21.651 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:21.651 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:21.651 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:21.910 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:21.910 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:22.168 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ca300554-4e00-4084-962b-c23b8b5e449e 00:07:22.168 
13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca300554-4e00-4084-962b-c23b8b5e449e 00:07:22.168 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:22.427 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:22.427 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:22.427 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ca300554-4e00-4084-962b-c23b8b5e449e lvol 150 00:07:22.687 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=168c39d7-a685-49ed-807c-c6c1fb3f7b28 00:07:22.687 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:22.687 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:22.946 [2024-10-01 13:35:14.697436] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:22.946 [2024-10-01 13:35:14.697533] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:22.946 true 00:07:22.946 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:22.946 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca300554-4e00-4084-962b-c23b8b5e449e 00:07:23.205 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:23.205 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:23.463 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 168c39d7-a685-49ed-807c-c6c1fb3f7b28 00:07:23.722 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:23.982 [2024-10-01 13:35:15.770192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:23.982 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:24.242 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63356 00:07:24.242 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess 
$bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:24.242 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:24.242 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63356 /var/tmp/bdevperf.sock 00:07:24.242 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63356 ']' 00:07:24.242 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:24.242 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.242 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:24.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:24.242 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.242 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:24.502 [2024-10-01 13:35:16.141055] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:07:24.502 [2024-10-01 13:35:16.141161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63356 ] 00:07:24.502 [2024-10-01 13:35:16.277964] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.502 [2024-10-01 13:35:16.336864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.761 [2024-10-01 13:35:16.368728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.328 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.328 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:25.328 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:25.616 Nvme0n1 00:07:25.616 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:25.875 [ 00:07:25.875 { 00:07:25.875 "name": "Nvme0n1", 00:07:25.875 "aliases": [ 00:07:25.875 "168c39d7-a685-49ed-807c-c6c1fb3f7b28" 00:07:25.875 ], 00:07:25.875 "product_name": "NVMe disk", 00:07:25.875 "block_size": 4096, 00:07:25.875 "num_blocks": 38912, 00:07:25.875 "uuid": "168c39d7-a685-49ed-807c-c6c1fb3f7b28", 00:07:25.875 "numa_id": -1, 00:07:25.875 "assigned_rate_limits": { 00:07:25.875 "rw_ios_per_sec": 0, 00:07:25.875 "rw_mbytes_per_sec": 0, 00:07:25.875 "r_mbytes_per_sec": 0, 00:07:25.875 "w_mbytes_per_sec": 0 00:07:25.875 }, 00:07:25.875 
"claimed": false, 00:07:25.875 "zoned": false, 00:07:25.875 "supported_io_types": { 00:07:25.875 "read": true, 00:07:25.875 "write": true, 00:07:25.875 "unmap": true, 00:07:25.875 "flush": true, 00:07:25.875 "reset": true, 00:07:25.875 "nvme_admin": true, 00:07:25.875 "nvme_io": true, 00:07:25.875 "nvme_io_md": false, 00:07:25.875 "write_zeroes": true, 00:07:25.875 "zcopy": false, 00:07:25.875 "get_zone_info": false, 00:07:25.875 "zone_management": false, 00:07:25.875 "zone_append": false, 00:07:25.875 "compare": true, 00:07:25.875 "compare_and_write": true, 00:07:25.875 "abort": true, 00:07:25.875 "seek_hole": false, 00:07:25.875 "seek_data": false, 00:07:25.875 "copy": true, 00:07:25.875 "nvme_iov_md": false 00:07:25.875 }, 00:07:25.875 "memory_domains": [ 00:07:25.875 { 00:07:25.875 "dma_device_id": "system", 00:07:25.875 "dma_device_type": 1 00:07:25.875 } 00:07:25.875 ], 00:07:25.875 "driver_specific": { 00:07:25.875 "nvme": [ 00:07:25.875 { 00:07:25.875 "trid": { 00:07:25.875 "trtype": "TCP", 00:07:25.875 "adrfam": "IPv4", 00:07:25.875 "traddr": "10.0.0.3", 00:07:25.875 "trsvcid": "4420", 00:07:25.875 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:25.875 }, 00:07:25.875 "ctrlr_data": { 00:07:25.875 "cntlid": 1, 00:07:25.875 "vendor_id": "0x8086", 00:07:25.875 "model_number": "SPDK bdev Controller", 00:07:25.875 "serial_number": "SPDK0", 00:07:25.875 "firmware_revision": "25.01", 00:07:25.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:25.875 "oacs": { 00:07:25.875 "security": 0, 00:07:25.875 "format": 0, 00:07:25.875 "firmware": 0, 00:07:25.875 "ns_manage": 0 00:07:25.875 }, 00:07:25.875 "multi_ctrlr": true, 00:07:25.875 "ana_reporting": false 00:07:25.875 }, 00:07:25.875 "vs": { 00:07:25.875 "nvme_version": "1.3" 00:07:25.875 }, 00:07:25.875 "ns_data": { 00:07:25.875 "id": 1, 00:07:25.875 "can_share": true 00:07:25.875 } 00:07:25.875 } 00:07:25.875 ], 00:07:25.875 "mp_policy": "active_passive" 00:07:25.875 } 00:07:25.875 } 00:07:25.875 ] 00:07:25.875 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63385 00:07:25.875 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:25.875 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:26.134 Running I/O for 10 seconds... 
00:07:27.071 Latency(us) 00:07:27.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.071 Nvme0n1 : 1.00 6223.00 24.31 0.00 0.00 0.00 0.00 0.00 00:07:27.071 =================================================================================================================== 00:07:27.071 Total : 6223.00 24.31 0.00 0.00 0.00 0.00 0.00 00:07:27.071 00:07:28.008 13:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ca300554-4e00-4084-962b-c23b8b5e449e 00:07:28.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.008 Nvme0n1 : 2.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:07:28.008 =================================================================================================================== 00:07:28.008 Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:07:28.008 00:07:28.266 true 00:07:28.266 13:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca300554-4e00-4084-962b-c23b8b5e449e 00:07:28.266 13:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:28.525 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:28.525 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:28.525 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63385 00:07:29.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.092 Nvme0n1 : 3.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:29.092 =================================================================================================================== 00:07:29.092 Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:29.092 00:07:30.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.029 Nvme0n1 : 4.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:30.029 =================================================================================================================== 00:07:30.029 Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:30.029 00:07:30.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.965 Nvme0n1 : 5.00 6281.00 24.54 0.00 0.00 0.00 0.00 0.00 00:07:30.965 =================================================================================================================== 00:07:30.965 Total : 6281.00 24.54 0.00 0.00 0.00 0.00 0.00 00:07:30.965 00:07:32.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.342 Nvme0n1 : 6.00 6271.67 24.50 0.00 0.00 0.00 0.00 0.00 00:07:32.342 =================================================================================================================== 00:07:32.342 Total : 6271.67 24.50 0.00 0.00 0.00 0.00 0.00 00:07:32.342 00:07:33.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.277 Nvme0n1 : 7.00 6246.57 24.40 0.00 0.00 0.00 0.00 0.00 00:07:33.277 =================================================================================================================== 00:07:33.277 
Total : 6246.57 24.40 0.00 0.00 0.00 0.00 0.00 00:07:33.277 00:07:34.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.216 Nvme0n1 : 8.00 6211.88 24.27 0.00 0.00 0.00 0.00 0.00 00:07:34.216 =================================================================================================================== 00:07:34.216 Total : 6211.88 24.27 0.00 0.00 0.00 0.00 0.00 00:07:34.216 00:07:35.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.154 Nvme0n1 : 9.00 6213.11 24.27 0.00 0.00 0.00 0.00 0.00 00:07:35.154 =================================================================================================================== 00:07:35.154 Total : 6213.11 24.27 0.00 0.00 0.00 0.00 0.00 00:07:35.154 00:07:36.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.092 Nvme0n1 : 10.00 6214.10 24.27 0.00 0.00 0.00 0.00 0.00 00:07:36.092 =================================================================================================================== 00:07:36.092 Total : 6214.10 24.27 0.00 0.00 0.00 0.00 0.00 00:07:36.092 00:07:36.092 00:07:36.092 Latency(us) 00:07:36.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.092 Nvme0n1 : 10.03 6210.92 24.26 0.00 0.00 20602.62 5630.14 115819.99 00:07:36.092 =================================================================================================================== 00:07:36.092 Total : 6210.92 24.26 0.00 0.00 20602.62 5630.14 115819.99 00:07:36.092 { 00:07:36.092 "results": [ 00:07:36.092 { 00:07:36.092 "job": "Nvme0n1", 00:07:36.092 "core_mask": "0x2", 00:07:36.092 "workload": "randwrite", 00:07:36.092 "status": "finished", 00:07:36.092 "queue_depth": 128, 00:07:36.092 "io_size": 4096, 00:07:36.092 "runtime": 10.025722, 00:07:36.092 "iops": 6210.924260616841, 00:07:36.092 "mibps": 24.261422893034535, 00:07:36.092 "io_failed": 0, 00:07:36.092 "io_timeout": 0, 00:07:36.092 "avg_latency_us": 20602.618976493482, 00:07:36.092 "min_latency_us": 5630.138181818182, 00:07:36.092 "max_latency_us": 115819.98545454546 00:07:36.092 } 00:07:36.092 ], 00:07:36.092 "core_count": 1 00:07:36.092 } 00:07:36.092 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63356 00:07:36.092 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 63356 ']' 00:07:36.092 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 63356 00:07:36.092 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:36.092 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.092 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63356 00:07:36.092 killing process with pid 63356 00:07:36.092 Received shutdown signal, test time was about 10.000000 seconds 00:07:36.092 00:07:36.092 Latency(us) 00:07:36.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.092 =================================================================================================================== 00:07:36.092 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:36.092 13:35:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:36.092 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:36.092 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63356' 00:07:36.092 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 63356 00:07:36.092 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 63356 00:07:36.353 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:36.612 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:36.871 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca300554-4e00-4084-962b-c23b8b5e449e 00:07:36.871 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:37.437 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:37.437 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:37.437 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 62996 00:07:37.437 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 62996 00:07:37.437 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 62996 Killed "${NVMF_APP[@]}" "$@" 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=63522 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 63522 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63522 ']' 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 
-- # local max_retries=100 00:07:37.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.437 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.437 [2024-10-01 13:35:29.095417] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:07:37.437 [2024-10-01 13:35:29.095528] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.437 [2024-10-01 13:35:29.245578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.697 [2024-10-01 13:35:29.301973] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.697 [2024-10-01 13:35:29.302042] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.697 [2024-10-01 13:35:29.302067] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.697 [2024-10-01 13:35:29.302074] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.697 [2024-10-01 13:35:29.302080] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.697 [2024-10-01 13:35:29.302110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.697 [2024-10-01 13:35:29.333066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.263 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.263 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:38.263 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:38.263 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.263 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:38.263 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.263 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:38.522 [2024-10-01 13:35:30.364105] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:38.522 [2024-10-01 13:35:30.364425] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:38.522 [2024-10-01 13:35:30.364585] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:38.780 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:38.780 
13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 168c39d7-a685-49ed-807c-c6c1fb3f7b28 00:07:38.780 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=168c39d7-a685-49ed-807c-c6c1fb3f7b28 00:07:38.780 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:38.780 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:38.780 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:38.780 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:38.780 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:39.038 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 168c39d7-a685-49ed-807c-c6c1fb3f7b28 -t 2000 00:07:39.296 [ 00:07:39.296 { 00:07:39.296 "name": "168c39d7-a685-49ed-807c-c6c1fb3f7b28", 00:07:39.296 "aliases": [ 00:07:39.296 "lvs/lvol" 00:07:39.296 ], 00:07:39.296 "product_name": "Logical Volume", 00:07:39.296 "block_size": 4096, 00:07:39.296 "num_blocks": 38912, 00:07:39.296 "uuid": "168c39d7-a685-49ed-807c-c6c1fb3f7b28", 00:07:39.296 "assigned_rate_limits": { 00:07:39.296 "rw_ios_per_sec": 0, 00:07:39.296 "rw_mbytes_per_sec": 0, 00:07:39.296 "r_mbytes_per_sec": 0, 00:07:39.296 "w_mbytes_per_sec": 0 00:07:39.296 }, 00:07:39.296 "claimed": false, 00:07:39.296 "zoned": false, 00:07:39.296 "supported_io_types": { 00:07:39.296 "read": true, 00:07:39.296 "write": true, 00:07:39.296 "unmap": true, 00:07:39.296 "flush": false, 00:07:39.296 "reset": true, 00:07:39.296 "nvme_admin": false, 00:07:39.296 "nvme_io": false, 00:07:39.296 "nvme_io_md": false, 00:07:39.296 "write_zeroes": true, 00:07:39.296 "zcopy": false, 00:07:39.296 "get_zone_info": false, 00:07:39.296 "zone_management": false, 00:07:39.296 "zone_append": false, 00:07:39.296 "compare": false, 00:07:39.296 "compare_and_write": false, 00:07:39.296 "abort": false, 00:07:39.296 "seek_hole": true, 00:07:39.296 "seek_data": true, 00:07:39.296 "copy": false, 00:07:39.296 "nvme_iov_md": false 00:07:39.296 }, 00:07:39.296 "driver_specific": { 00:07:39.296 "lvol": { 00:07:39.296 "lvol_store_uuid": "ca300554-4e00-4084-962b-c23b8b5e449e", 00:07:39.296 "base_bdev": "aio_bdev", 00:07:39.296 "thin_provision": false, 00:07:39.296 "num_allocated_clusters": 38, 00:07:39.296 "snapshot": false, 00:07:39.296 "clone": false, 00:07:39.296 "esnap_clone": false 00:07:39.296 } 00:07:39.296 } 00:07:39.296 } 00:07:39.296 ] 00:07:39.296 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:39.296 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca300554-4e00-4084-962b-c23b8b5e449e 00:07:39.296 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:39.555 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 
00:07:39.555 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca300554-4e00-4084-962b-c23b8b5e449e 00:07:39.555 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:39.813 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:39.814 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.073 [2024-10-01 13:35:31.749824] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:40.073 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca300554-4e00-4084-962b-c23b8b5e449e 00:07:40.073 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:40.073 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca300554-4e00-4084-962b-c23b8b5e449e 00:07:40.073 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:40.073 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.073 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:40.073 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.073 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:40.073 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.073 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:40.073 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:40.073 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca300554-4e00-4084-962b-c23b8b5e449e 00:07:40.332 request: 00:07:40.332 { 00:07:40.332 "uuid": "ca300554-4e00-4084-962b-c23b8b5e449e", 00:07:40.332 "method": "bdev_lvol_get_lvstores", 00:07:40.332 "req_id": 1 00:07:40.332 } 00:07:40.332 Got JSON-RPC error response 00:07:40.332 response: 00:07:40.332 { 00:07:40.332 "code": -19, 00:07:40.332 "message": "No such device" 00:07:40.332 } 00:07:40.332 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:40.332 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.332 13:35:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.332 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.332 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:40.590 aio_bdev 00:07:40.590 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 168c39d7-a685-49ed-807c-c6c1fb3f7b28 00:07:40.590 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=168c39d7-a685-49ed-807c-c6c1fb3f7b28 00:07:40.590 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:40.590 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:40.590 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:40.591 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:40.591 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:40.849 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 168c39d7-a685-49ed-807c-c6c1fb3f7b28 -t 2000 00:07:41.107 [ 00:07:41.107 { 00:07:41.107 "name": "168c39d7-a685-49ed-807c-c6c1fb3f7b28", 00:07:41.107 "aliases": [ 00:07:41.107 "lvs/lvol" 00:07:41.107 ], 00:07:41.107 "product_name": "Logical Volume", 00:07:41.107 "block_size": 4096, 00:07:41.107 "num_blocks": 38912, 00:07:41.107 "uuid": "168c39d7-a685-49ed-807c-c6c1fb3f7b28", 00:07:41.107 "assigned_rate_limits": { 00:07:41.107 "rw_ios_per_sec": 0, 00:07:41.107 "rw_mbytes_per_sec": 0, 00:07:41.107 "r_mbytes_per_sec": 0, 00:07:41.107 "w_mbytes_per_sec": 0 00:07:41.107 }, 00:07:41.107 "claimed": false, 00:07:41.107 "zoned": false, 00:07:41.107 "supported_io_types": { 00:07:41.107 "read": true, 00:07:41.107 "write": true, 00:07:41.107 "unmap": true, 00:07:41.107 "flush": false, 00:07:41.107 "reset": true, 00:07:41.107 "nvme_admin": false, 00:07:41.107 "nvme_io": false, 00:07:41.107 "nvme_io_md": false, 00:07:41.107 "write_zeroes": true, 00:07:41.107 "zcopy": false, 00:07:41.107 "get_zone_info": false, 00:07:41.107 "zone_management": false, 00:07:41.107 "zone_append": false, 00:07:41.107 "compare": false, 00:07:41.107 "compare_and_write": false, 00:07:41.107 "abort": false, 00:07:41.107 "seek_hole": true, 00:07:41.107 "seek_data": true, 00:07:41.107 "copy": false, 00:07:41.107 "nvme_iov_md": false 00:07:41.107 }, 00:07:41.107 "driver_specific": { 00:07:41.107 "lvol": { 00:07:41.107 "lvol_store_uuid": "ca300554-4e00-4084-962b-c23b8b5e449e", 00:07:41.107 "base_bdev": "aio_bdev", 00:07:41.107 "thin_provision": false, 00:07:41.108 "num_allocated_clusters": 38, 00:07:41.108 "snapshot": false, 00:07:41.108 "clone": false, 00:07:41.108 "esnap_clone": false 00:07:41.108 } 00:07:41.108 } 00:07:41.108 } 00:07:41.108 ] 00:07:41.108 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # 
return 0 00:07:41.108 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca300554-4e00-4084-962b-c23b8b5e449e 00:07:41.108 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:41.366 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:41.366 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca300554-4e00-4084-962b-c23b8b5e449e 00:07:41.366 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:41.624 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:41.624 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 168c39d7-a685-49ed-807c-c6c1fb3f7b28 00:07:41.881 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ca300554-4e00-4084-962b-c23b8b5e449e 00:07:42.138 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:42.395 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:42.963 00:07:42.963 real 0m21.205s 00:07:42.963 user 0m43.666s 00:07:42.963 sys 0m8.855s 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:42.963 ************************************ 00:07:42.963 END TEST lvs_grow_dirty 00:07:42.963 ************************************ 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:42.963 nvmf_trace.0 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 
-- # return 0 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:42.963 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:43.225 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:43.225 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:43.225 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:43.225 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:43.225 rmmod nvme_tcp 00:07:43.225 rmmod nvme_fabrics 00:07:43.225 rmmod nvme_keyring 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 63522 ']' 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 63522 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 63522 ']' 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 63522 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63522 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.225 killing process with pid 63522 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63522' 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 63522 00:07:43.225 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 63522 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:43.484 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:43.744 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:43.744 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:43.744 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:43.745 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:43.745 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:43.745 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.745 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.745 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.745 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:07:43.745 00:07:43.745 real 0m43.128s 00:07:43.745 user 1m8.540s 00:07:43.745 sys 0m12.285s 00:07:43.745 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.745 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:43.745 ************************************ 00:07:43.745 END TEST nvmf_lvs_grow 00:07:43.745 ************************************ 00:07:43.745 13:35:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:43.745 13:35:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:43.745 13:35:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.745 13:35:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:43.745 ************************************ 00:07:43.745 START TEST nvmf_bdev_io_wait 00:07:43.745 ************************************ 00:07:43.745 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:43.745 * Looking for test storage... 
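The nvmf_lvs_grow teardown traced above archives the shared-memory trace file and dismantles the virtual network before the next suite starts. A rough manual equivalent, assuming the interface and namespace names this harness uses (the final namespace removal is an assumption; the trace only shows _remove_spdk_ns):

  # Archive the SPDK trace buffer left behind by app instance 0.
  tar -C /dev/shm/ -cvzf nvmf_trace.0_shm.tar.gz nvmf_trace.0

  # Unload the kernel NVMe-oF initiator modules and drop the harness iptables rules.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Detach and delete the veth/bridge topology, then the target namespace.
  ip link set nvmf_init_br nomaster
  ip link set nvmf_tgt_br nomaster
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk    # assumed equivalent of _remove_spdk_ns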
00:07:44.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:44.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.005 --rc genhtml_branch_coverage=1 00:07:44.005 --rc genhtml_function_coverage=1 00:07:44.005 --rc genhtml_legend=1 00:07:44.005 --rc geninfo_all_blocks=1 00:07:44.005 --rc geninfo_unexecuted_blocks=1 00:07:44.005 00:07:44.005 ' 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:44.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.005 --rc genhtml_branch_coverage=1 00:07:44.005 --rc genhtml_function_coverage=1 00:07:44.005 --rc genhtml_legend=1 00:07:44.005 --rc geninfo_all_blocks=1 00:07:44.005 --rc geninfo_unexecuted_blocks=1 00:07:44.005 00:07:44.005 ' 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:44.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.005 --rc genhtml_branch_coverage=1 00:07:44.005 --rc genhtml_function_coverage=1 00:07:44.005 --rc genhtml_legend=1 00:07:44.005 --rc geninfo_all_blocks=1 00:07:44.005 --rc geninfo_unexecuted_blocks=1 00:07:44.005 00:07:44.005 ' 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:44.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.005 --rc genhtml_branch_coverage=1 00:07:44.005 --rc genhtml_function_coverage=1 00:07:44.005 --rc genhtml_legend=1 00:07:44.005 --rc geninfo_all_blocks=1 00:07:44.005 --rc geninfo_unexecuted_blocks=1 00:07:44.005 00:07:44.005 ' 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.005 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
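The scripts/common.sh trace above ("lt 1.15 2" via cmp_versions) is just a dotted-version comparison used to pick lcov options. A simplified stand-alone sketch of that comparison, not the harness helper itself:

  # Succeeds when $1 is strictly older than $2 (e.g. 1.15 < 2).
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
          (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
      done
      return 1
  }

  version_lt 1.15 2 && echo "lcov is older than 2"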
00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:44.005 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:44.006 
13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:44.006 Cannot find device "nvmf_init_br" 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:44.006 Cannot find device "nvmf_init_br2" 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:44.006 Cannot find device "nvmf_tgt_br" 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:44.006 Cannot find device "nvmf_tgt_br2" 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:44.006 Cannot find device "nvmf_init_br" 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:44.006 Cannot find device "nvmf_init_br2" 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:44.006 Cannot find device "nvmf_tgt_br" 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:44.006 Cannot find device "nvmf_tgt_br2" 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:44.006 Cannot find device "nvmf_br" 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:44.006 Cannot find device "nvmf_init_if" 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:07:44.006 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:44.265 Cannot find device "nvmf_init_if2" 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:44.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:07:44.265 
13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:44.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:44.265 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:44.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:44.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:07:44.265 00:07:44.265 --- 10.0.0.3 ping statistics --- 00:07:44.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.265 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:44.265 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:44.265 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:07:44.265 00:07:44.265 --- 10.0.0.4 ping statistics --- 00:07:44.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.265 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:44.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:44.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:44.265 00:07:44.265 --- 10.0.0.1 ping statistics --- 00:07:44.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.265 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:44.265 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:44.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:44.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:07:44.265 00:07:44.265 --- 10.0.0.2 ping statistics --- 00:07:44.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.265 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=63892 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 63892 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 63892 ']' 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.524 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.524 [2024-10-01 13:35:36.217850] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
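The nvmftestinit sequence traced above builds the topology those pings verify: veth pairs for initiator and target, the target ends moved into nvmf_tgt_ns_spdk, and the host-side peers enslaved to the nvmf_br bridge. A condensed sketch covering one initiator/target pair, with the addressing used in this run (the full harness also sets up nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4):

  ip netns add nvmf_tgt_ns_spdk

  # One veth pair for the initiator, one for the target; move the target end
  # into the namespace where nvmf_tgt will run.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addressing: initiator 10.0.0.1, target 10.0.0.3 (as in this run).
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers together and allow NVMe/TCP traffic in.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3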
00:07:44.524 [2024-10-01 13:35:36.217958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.524 [2024-10-01 13:35:36.360632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.783 [2024-10-01 13:35:36.433647] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.783 [2024-10-01 13:35:36.433710] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.783 [2024-10-01 13:35:36.433723] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.783 [2024-10-01 13:35:36.433733] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.783 [2024-10-01 13:35:36.433741] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.783 [2024-10-01 13:35:36.434730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.783 [2024-10-01 13:35:36.434829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.783 [2024-10-01 13:35:36.434966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.783 [2024-10-01 13:35:36.434976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.783 [2024-10-01 13:35:36.562284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.783 [2024-10-01 13:35:36.577470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.783 Malloc0 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.783 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.045 [2024-10-01 13:35:36.642687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:45.045 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.045 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63919 00:07:45.045 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:45.045 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:45.045 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63921 00:07:45.045 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:07:45.045 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:07:45.045 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:45.045 13:35:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:45.045 { 00:07:45.045 "params": { 00:07:45.045 "name": "Nvme$subsystem", 00:07:45.045 "trtype": "$TEST_TRANSPORT", 00:07:45.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:45.045 "adrfam": "ipv4", 00:07:45.045 "trsvcid": "$NVMF_PORT", 00:07:45.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:45.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:45.045 "hdgst": ${hdgst:-false}, 00:07:45.045 "ddgst": ${ddgst:-false} 00:07:45.045 }, 00:07:45.045 "method": "bdev_nvme_attach_controller" 00:07:45.045 } 00:07:45.045 EOF 00:07:45.045 )") 00:07:45.045 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:45.045 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63923 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:45.046 { 00:07:45.046 "params": { 00:07:45.046 "name": "Nvme$subsystem", 00:07:45.046 "trtype": "$TEST_TRANSPORT", 00:07:45.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:45.046 "adrfam": "ipv4", 00:07:45.046 "trsvcid": "$NVMF_PORT", 00:07:45.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:45.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:45.046 "hdgst": ${hdgst:-false}, 00:07:45.046 "ddgst": ${ddgst:-false} 00:07:45.046 }, 00:07:45.046 "method": "bdev_nvme_attach_controller" 00:07:45.046 } 00:07:45.046 EOF 00:07:45.046 )") 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63926 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:45.046 { 00:07:45.046 "params": { 00:07:45.046 "name": "Nvme$subsystem", 00:07:45.046 "trtype": 
"$TEST_TRANSPORT", 00:07:45.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:45.046 "adrfam": "ipv4", 00:07:45.046 "trsvcid": "$NVMF_PORT", 00:07:45.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:45.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:45.046 "hdgst": ${hdgst:-false}, 00:07:45.046 "ddgst": ${ddgst:-false} 00:07:45.046 }, 00:07:45.046 "method": "bdev_nvme_attach_controller" 00:07:45.046 } 00:07:45.046 EOF 00:07:45.046 )") 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:45.046 { 00:07:45.046 "params": { 00:07:45.046 "name": "Nvme$subsystem", 00:07:45.046 "trtype": "$TEST_TRANSPORT", 00:07:45.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:45.046 "adrfam": "ipv4", 00:07:45.046 "trsvcid": "$NVMF_PORT", 00:07:45.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:45.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:45.046 "hdgst": ${hdgst:-false}, 00:07:45.046 "ddgst": ${ddgst:-false} 00:07:45.046 }, 00:07:45.046 "method": "bdev_nvme_attach_controller" 00:07:45.046 } 00:07:45.046 EOF 00:07:45.046 )") 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:45.046 "params": { 00:07:45.046 "name": "Nvme1", 00:07:45.046 "trtype": "tcp", 00:07:45.046 "traddr": "10.0.0.3", 00:07:45.046 "adrfam": "ipv4", 00:07:45.046 "trsvcid": "4420", 00:07:45.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:45.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:45.046 "hdgst": false, 00:07:45.046 "ddgst": false 00:07:45.046 }, 00:07:45.046 "method": "bdev_nvme_attach_controller" 00:07:45.046 }' 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:45.046 "params": { 00:07:45.046 "name": "Nvme1", 00:07:45.046 "trtype": "tcp", 00:07:45.046 "traddr": "10.0.0.3", 00:07:45.046 "adrfam": "ipv4", 00:07:45.046 "trsvcid": "4420", 00:07:45.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:45.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:45.046 "hdgst": false, 00:07:45.046 "ddgst": false 00:07:45.046 }, 00:07:45.046 "method": "bdev_nvme_attach_controller" 00:07:45.046 }' 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:45.046 "params": { 00:07:45.046 "name": "Nvme1", 00:07:45.046 "trtype": "tcp", 00:07:45.046 "traddr": "10.0.0.3", 00:07:45.046 "adrfam": "ipv4", 00:07:45.046 "trsvcid": "4420", 00:07:45.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:45.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:45.046 "hdgst": false, 00:07:45.046 "ddgst": false 00:07:45.046 }, 00:07:45.046 "method": "bdev_nvme_attach_controller" 00:07:45.046 }' 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:45.046 "params": { 00:07:45.046 "name": "Nvme1", 00:07:45.046 "trtype": "tcp", 00:07:45.046 "traddr": "10.0.0.3", 00:07:45.046 "adrfam": "ipv4", 00:07:45.046 "trsvcid": "4420", 00:07:45.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:45.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:45.046 "hdgst": false, 00:07:45.046 "ddgst": false 00:07:45.046 }, 00:07:45.046 "method": "bdev_nvme_attach_controller" 00:07:45.046 }' 00:07:45.046 [2024-10-01 13:35:36.712705] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:07:45.046 [2024-10-01 13:35:36.712799] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:45.046 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63919 00:07:45.046 [2024-10-01 13:35:36.728761] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:07:45.046 [2024-10-01 13:35:36.728877] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:45.046 [2024-10-01 13:35:36.739686] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:07:45.046 [2024-10-01 13:35:36.739767] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:45.046 [2024-10-01 13:35:36.757357] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:07:45.046 [2024-10-01 13:35:36.757707] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:45.046 [2024-10-01 13:35:36.894411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.305 [2024-10-01 13:35:36.939166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.305 [2024-10-01 13:35:36.951409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:07:45.305 [2024-10-01 13:35:36.984555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.305 [2024-10-01 13:35:36.987762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.305 [2024-10-01 13:35:36.995383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:45.305 [2024-10-01 13:35:37.035267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.305 [2024-10-01 13:35:37.035840] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.305 [2024-10-01 13:35:37.045504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:07:45.305 [2024-10-01 13:35:37.079265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.305 [2024-10-01 13:35:37.092689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:07:45.305 Running I/O for 1 seconds... 00:07:45.305 [2024-10-01 13:35:37.125662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.305 Running I/O for 1 seconds... 00:07:45.564 Running I/O for 1 seconds... 00:07:45.564 Running I/O for 1 seconds... 
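The trace above shows target/bdev_io_wait.sh starting four bdevperf instances against the same NVMe-oF/TCP namespace, one per workload (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), each fed a generated JSON config on /dev/fd/63. The sketch below is a simplified reconstruction of that pattern, not the literal script: gen_nvmf_target_json is reduced to the heredoc and join visible in the trace (the full helper in test/nvmf/common.sh additionally wraps the entries in a complete bdev-subsystem config and pretty-prints it with jq, which is the "jq ." call seen above), and the launcher name run_one is invented here purely for illustration.

#!/usr/bin/env bash
# Simplified sketch of the bdev_io_wait flow traced above (assumptions noted in the lead-in).
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller entry per subsystem; values expand from the test environment.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # the real helper embeds this joined list in a full bdev config
}

run_one() { # core_mask  instance_id  workload   (hypothetical helper name)
    "$bdevperf" -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$3" -t 1 -s 256 &
}

run_one 0x10 1 write;  WRITE_PID=$!
run_one 0x20 2 read;   READ_PID=$!
run_one 0x40 3 flush;  FLUSH_PID=$!
run_one 0x80 4 unmap;  UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The --json /dev/fd/63 argument in each traced invocation is what bash process substitution (--json <(gen_nvmf_target_json)) looks like from the child process's point of view; the wait calls at bdev_io_wait.sh lines 37-40 in the trace correspond to PIDs 63919, 63921, 63923 and 63926 in this run.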
00:07:46.498 8930.00 IOPS, 34.88 MiB/s 00:07:46.499 Latency(us) 00:07:46.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.499 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:46.499 Nvme1n1 : 1.01 8969.83 35.04 0.00 0.00 14194.54 8579.26 20137.43 00:07:46.499 =================================================================================================================== 00:07:46.499 Total : 8969.83 35.04 0.00 0.00 14194.54 8579.26 20137.43 00:07:46.499 8249.00 IOPS, 32.22 MiB/s 00:07:46.499 Latency(us) 00:07:46.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.499 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:46.499 Nvme1n1 : 1.01 8307.83 32.45 0.00 0.00 15331.02 6494.02 25380.31 00:07:46.499 =================================================================================================================== 00:07:46.499 Total : 8307.83 32.45 0.00 0.00 15331.02 6494.02 25380.31 00:07:46.499 8475.00 IOPS, 33.11 MiB/s 00:07:46.499 Latency(us) 00:07:46.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.499 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:46.499 Nvme1n1 : 1.01 8561.37 33.44 0.00 0.00 14891.71 6345.08 27048.49 00:07:46.499 =================================================================================================================== 00:07:46.499 Total : 8561.37 33.44 0.00 0.00 14891.71 6345.08 27048.49 00:07:46.499 166456.00 IOPS, 650.22 MiB/s 00:07:46.499 Latency(us) 00:07:46.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.499 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:46.499 Nvme1n1 : 1.00 166086.65 648.78 0.00 0.00 766.52 396.57 2219.29 00:07:46.499 =================================================================================================================== 00:07:46.499 Total : 166086.65 648.78 0.00 0.00 766.52 396.57 2219.29 00:07:46.499 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63921 00:07:46.499 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63923 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63926 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 
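Two sanity checks can be read straight off the Latency(us) tables above: the MiB/s column is simply IOPS times the 4096-byte I/O size, and with a fixed queue depth of 128 the average latency follows from the IOPS by Little's law.

    8969.83 IOPS   x 4096 B ≈ 36.74 MB/s ≈ 35.04 MiB/s      (read instance)
    166086.65 IOPS x 4096 B ≈ 680.29 MB/s ≈ 648.78 MiB/s    (flush instance)
    avg latency ≈ queue_depth / IOPS: 128 / 8969.83 ≈ 14.3 ms, close to the reported 14194.54 us

The flush workload posts far higher IOPS because each flush completes in well under a millisecond here (average 766.52 us at depth 128) versus roughly 14-15 ms for the data-moving workloads, so the same queue drains much faster. The same relationship roughly holds for the queue-depth run later in this log: 1024 / 7892.01 ≈ 129.8 ms against a reported average of 129083.12 us.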
00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:46.758 rmmod nvme_tcp 00:07:46.758 rmmod nvme_fabrics 00:07:46.758 rmmod nvme_keyring 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 63892 ']' 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 63892 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 63892 ']' 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 63892 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63892 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.758 killing process with pid 63892 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63892' 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 63892 00:07:46.758 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 63892 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 
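The nvmftestfini sequence running here (and again at the end of the queue_depth test below) tears down what nvmftestinit built: the SPDK-owned firewall rules are scrubbed first, then the veth/bridge topology is deleted. The iptr helper traced above only shows its component commands; a plausible reconstruction of how they fit together, assuming the rules carry the SPDK_NVMF comment that the ipts calls later in this log attach, is:

    # drop only the rules the test suite added (tagged with an 'SPDK_NVMF:' comment), keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore

After that, nvmf_veth_fini detaches and downs the host-side interfaces (nvmf_init_br, nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2), deletes the nvmf_br bridge and the host-side veth ends, removes the target-side interfaces inside the nvmf_tgt_ns_spdk namespace, and finally deletes the namespace itself, which is exactly the sequence of ip link and ip netns commands traced around this point.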
00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:47.017 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:47.276 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:47.276 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:47.276 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.276 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.276 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.276 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:07:47.276 00:07:47.276 real 0m3.424s 00:07:47.276 user 0m13.471s 00:07:47.276 sys 0m2.131s 00:07:47.276 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.276 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.276 ************************************ 00:07:47.276 END TEST nvmf_bdev_io_wait 00:07:47.276 ************************************ 00:07:47.276 13:35:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:47.276 13:35:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:47.276 13:35:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.276 13:35:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:47.276 ************************************ 00:07:47.276 START TEST nvmf_queue_depth 00:07:47.276 ************************************ 00:07:47.276 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:47.276 * Looking for test storage... 
00:07:47.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.276 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:47.276 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:07:47.276 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:47.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.536 --rc genhtml_branch_coverage=1 00:07:47.536 --rc genhtml_function_coverage=1 00:07:47.536 --rc genhtml_legend=1 00:07:47.536 --rc geninfo_all_blocks=1 00:07:47.536 --rc geninfo_unexecuted_blocks=1 00:07:47.536 00:07:47.536 ' 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:47.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.536 --rc genhtml_branch_coverage=1 00:07:47.536 --rc genhtml_function_coverage=1 00:07:47.536 --rc genhtml_legend=1 00:07:47.536 --rc geninfo_all_blocks=1 00:07:47.536 --rc geninfo_unexecuted_blocks=1 00:07:47.536 00:07:47.536 ' 00:07:47.536 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:47.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.537 --rc genhtml_branch_coverage=1 00:07:47.537 --rc genhtml_function_coverage=1 00:07:47.537 --rc genhtml_legend=1 00:07:47.537 --rc geninfo_all_blocks=1 00:07:47.537 --rc geninfo_unexecuted_blocks=1 00:07:47.537 00:07:47.537 ' 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:47.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.537 --rc genhtml_branch_coverage=1 00:07:47.537 --rc genhtml_function_coverage=1 00:07:47.537 --rc genhtml_legend=1 00:07:47.537 --rc geninfo_all_blocks=1 00:07:47.537 --rc geninfo_unexecuted_blocks=1 00:07:47.537 00:07:47.537 ' 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:47.537 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:47.537 
13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:47.537 13:35:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:47.537 Cannot find device "nvmf_init_br" 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:47.537 Cannot find device "nvmf_init_br2" 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:47.537 Cannot find device "nvmf_tgt_br" 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:47.537 Cannot find device "nvmf_tgt_br2" 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:47.537 Cannot find device "nvmf_init_br" 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:07:47.537 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:47.538 Cannot find device "nvmf_init_br2" 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:47.538 Cannot find device "nvmf_tgt_br" 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:47.538 Cannot find device "nvmf_tgt_br2" 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:47.538 Cannot find device "nvmf_br" 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:47.538 Cannot find device "nvmf_init_if" 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:47.538 Cannot find device "nvmf_init_if2" 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:47.538 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.538 13:35:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:47.538 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:47.538 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:47.797 
13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:47.797 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:47.797 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:07:47.797 00:07:47.797 --- 10.0.0.3 ping statistics --- 00:07:47.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.797 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:47.797 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:47.797 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:07:47.797 00:07:47.797 --- 10.0.0.4 ping statistics --- 00:07:47.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.797 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:47.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:47.797 00:07:47.797 --- 10.0.0.1 ping statistics --- 00:07:47.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.797 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:47.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:47.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:07:47.797 00:07:47.797 --- 10.0.0.2 ping statistics --- 00:07:47.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.797 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=64180 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 64180 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64180 ']' 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.797 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.797 [2024-10-01 13:35:39.648637] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:07:47.797 [2024-10-01 13:35:39.648731] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.056 [2024-10-01 13:35:39.789908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.056 [2024-10-01 13:35:39.848034] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.056 [2024-10-01 13:35:39.848271] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.056 [2024-10-01 13:35:39.848364] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.056 [2024-10-01 13:35:39.848431] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.056 [2024-10-01 13:35:39.848501] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.056 [2024-10-01 13:35:39.848663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.056 [2024-10-01 13:35:39.878197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.315 [2024-10-01 13:35:39.969346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.315 Malloc0 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.315 13:35:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.315 [2024-10-01 13:35:40.017517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64210 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64210 /var/tmp/bdevperf.sock 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64210 ']' 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.315 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.315 [2024-10-01 13:35:40.077691] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:07:48.315 [2024-10-01 13:35:40.077801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64210 ] 00:07:48.574 [2024-10-01 13:35:40.219404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.574 [2024-10-01 13:35:40.291307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.574 [2024-10-01 13:35:40.325813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.574 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.574 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:48.574 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:48.574 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.574 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.833 NVMe0n1 00:07:48.833 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.833 13:35:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:48.833 Running I/O for 10 seconds... 00:07:59.110 6226.00 IOPS, 24.32 MiB/s 6765.50 IOPS, 26.43 MiB/s 7144.00 IOPS, 27.91 MiB/s 7294.25 IOPS, 28.49 MiB/s 7405.20 IOPS, 28.93 MiB/s 7539.00 IOPS, 29.45 MiB/s 7630.00 IOPS, 29.80 MiB/s 7700.38 IOPS, 30.08 MiB/s 7794.44 IOPS, 30.45 MiB/s 7869.90 IOPS, 30.74 MiB/s 00:07:59.110 Latency(us) 00:07:59.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.110 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:59.110 Verification LBA range: start 0x0 length 0x4000 00:07:59.110 NVMe0n1 : 10.08 7892.01 30.83 0.00 0.00 129083.12 24665.37 98184.84 00:07:59.110 =================================================================================================================== 00:07:59.110 Total : 7892.01 30.83 0.00 0.00 129083.12 24665.37 98184.84 00:07:59.110 { 00:07:59.110 "results": [ 00:07:59.110 { 00:07:59.110 "job": "NVMe0n1", 00:07:59.110 "core_mask": "0x1", 00:07:59.110 "workload": "verify", 00:07:59.110 "status": "finished", 00:07:59.110 "verify_range": { 00:07:59.110 "start": 0, 00:07:59.110 "length": 16384 00:07:59.110 }, 00:07:59.110 "queue_depth": 1024, 00:07:59.110 "io_size": 4096, 00:07:59.110 "runtime": 10.080327, 00:07:59.110 "iops": 7892.005884332919, 00:07:59.110 "mibps": 30.828147985675464, 00:07:59.110 "io_failed": 0, 00:07:59.110 "io_timeout": 0, 00:07:59.110 "avg_latency_us": 129083.11816851677, 00:07:59.110 "min_latency_us": 24665.36727272727, 00:07:59.110 "max_latency_us": 98184.84363636364 00:07:59.110 } 00:07:59.110 ], 00:07:59.110 "core_count": 1 00:07:59.110 } 00:07:59.110 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64210 00:07:59.110 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64210 ']' 00:07:59.111 13:35:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64210 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64210 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.111 killing process with pid 64210 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64210' 00:07:59.111 Received shutdown signal, test time was about 10.000000 seconds 00:07:59.111 00:07:59.111 Latency(us) 00:07:59.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.111 =================================================================================================================== 00:07:59.111 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64210 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64210 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.111 rmmod nvme_tcp 00:07:59.111 rmmod nvme_fabrics 00:07:59.111 rmmod nvme_keyring 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 64180 ']' 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 64180 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64180 ']' 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64180 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.111 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64180 00:07:59.370 killing process with pid 64180 00:07:59.370 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:59.370 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:59.370 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64180' 00:07:59.370 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64180 00:07:59.370 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64180 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:59.370 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:59.629 13:35:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:07:59.629 00:07:59.629 real 0m12.410s 00:07:59.629 user 0m21.180s 00:07:59.629 sys 0m2.123s 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.629 ************************************ 00:07:59.629 END TEST nvmf_queue_depth 00:07:59.629 ************************************ 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.629 ************************************ 00:07:59.629 START TEST nvmf_target_multipath 00:07:59.629 ************************************ 00:07:59.629 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:59.889 * Looking for test storage... 
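The queue-depth test that just finished is driven entirely over bdevperf's RPC socket: attach the remote namespace as a bdev, then ask bdevperf to run the configured workload. A minimal sketch of those two control-plane calls, using only the socket path, names and address visible in the trace above (rpc.py stands in for the test's rpc_cmd wrapper):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests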
00:07:59.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:59.889 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:59.889 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:07:59.889 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:59.889 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:59.889 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.889 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.889 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:59.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.890 --rc genhtml_branch_coverage=1 00:07:59.890 --rc genhtml_function_coverage=1 00:07:59.890 --rc genhtml_legend=1 00:07:59.890 --rc geninfo_all_blocks=1 00:07:59.890 --rc geninfo_unexecuted_blocks=1 00:07:59.890 00:07:59.890 ' 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:59.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.890 --rc genhtml_branch_coverage=1 00:07:59.890 --rc genhtml_function_coverage=1 00:07:59.890 --rc genhtml_legend=1 00:07:59.890 --rc geninfo_all_blocks=1 00:07:59.890 --rc geninfo_unexecuted_blocks=1 00:07:59.890 00:07:59.890 ' 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:59.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.890 --rc genhtml_branch_coverage=1 00:07:59.890 --rc genhtml_function_coverage=1 00:07:59.890 --rc genhtml_legend=1 00:07:59.890 --rc geninfo_all_blocks=1 00:07:59.890 --rc geninfo_unexecuted_blocks=1 00:07:59.890 00:07:59.890 ' 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:59.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.890 --rc genhtml_branch_coverage=1 00:07:59.890 --rc genhtml_function_coverage=1 00:07:59.890 --rc genhtml_legend=1 00:07:59.890 --rc geninfo_all_blocks=1 00:07:59.890 --rc geninfo_unexecuted_blocks=1 00:07:59.890 00:07:59.890 ' 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.890 
13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.890 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.890 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:59.891 13:35:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:59.891 Cannot find device "nvmf_init_br" 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:59.891 Cannot find device "nvmf_init_br2" 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:59.891 Cannot find device "nvmf_tgt_br" 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:59.891 Cannot find device "nvmf_tgt_br2" 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:07:59.891 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:59.891 Cannot find device "nvmf_init_br" 00:08:00.150 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:00.150 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:00.150 Cannot find device "nvmf_init_br2" 00:08:00.150 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:00.150 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:00.150 Cannot find device "nvmf_tgt_br" 00:08:00.150 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:00.151 Cannot find device "nvmf_tgt_br2" 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:00.151 Cannot find device "nvmf_br" 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:00.151 Cannot find device "nvmf_init_if" 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:00.151 Cannot find device "nvmf_init_if2" 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:00.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:00.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
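The nvmf_veth_init steps being traced here build the test topology from scratch: one network namespace for the target, two veth pairs per side, and (in the lines that follow) a bridge joining the peer ends plus iptables ACCEPT rules for port 4420. A condensed sketch with the interface names and addresses used in this run:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator, 10.0.0.1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator, 10.0.0.2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target,    10.0.0.3
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target,    10.0.0.4
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # every interface is then brought up (ip link set ... up), as traced above and below
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" master nvmf_br
  done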
00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:00.151 13:35:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:00.410 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:00.410 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:08:00.410 00:08:00.410 --- 10.0.0.3 ping statistics --- 00:08:00.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.410 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:00.410 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:00.410 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:08:00.410 00:08:00.410 --- 10.0.0.4 ping statistics --- 00:08:00.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.410 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:00.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:00.410 00:08:00.410 --- 10.0.0.1 ping statistics --- 00:08:00.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.410 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:00.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:08:00.410 00:08:00.410 --- 10.0.0.2 ping statistics --- 00:08:00.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.410 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:00.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
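With the pings confirming connectivity across the bridge, nvmfappstart launches the SPDK target inside the namespace and waits for its RPC socket. Condensed from the lines that follow (waitforlisten is the autotest helper that polls the socket; the backgrounding shown here is a sketch of what common.sh does):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # blocks until /var/tmp/spdk.sock accepts RPCs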
00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=64570 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 64570 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 64570 ']' 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.410 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.411 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:00.411 [2024-10-01 13:35:52.162131] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:08:00.411 [2024-10-01 13:35:52.162393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.729 [2024-10-01 13:35:52.303020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.729 [2024-10-01 13:35:52.374787] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.729 [2024-10-01 13:35:52.375087] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.729 [2024-10-01 13:35:52.375255] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.729 [2024-10-01 13:35:52.375378] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.729 [2024-10-01 13:35:52.375392] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
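Once the app is up, the target side is configured purely through rpc.py: a TCP transport, a 64 MiB / 512-byte-block malloc bdev as the namespace, and one subsystem exported on both in-namespace addresses. The calls, condensed from the trace that follows:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420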
00:08:00.729 [2024-10-01 13:35:52.375573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.729 [2024-10-01 13:35:52.375679] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.729 [2024-10-01 13:35:52.375835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.729 [2024-10-01 13:35:52.375843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.729 [2024-10-01 13:35:52.409970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.729 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.729 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:08:00.729 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:00.729 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:00.729 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:00.729 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.729 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:00.988 [2024-10-01 13:35:52.746917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.988 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:01.246 Malloc0 00:08:01.246 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:01.814 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:01.814 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:02.073 [2024-10-01 13:35:53.882468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:02.073 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:02.331 [2024-10-01 13:35:54.142795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:02.331 13:35:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid=2b7d6042-0a58-4103-9990-589a1a785035 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:02.589 13:35:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid=2b7d6042-0a58-4103-9990-589a1a785035 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:08:02.589 13:35:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:02.589 13:35:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:08:02.589 13:35:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:02.589 13:35:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:02.590 13:35:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
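Connecting the same subsystem through both listener addresses (the two nvme connect calls above) gives the host a single NVMe subsystem with two controller paths, which the script resolves to nvme0c0n1 and nvme0c1n1. A minimal sketch of that step with the host identity used in this run:

  HOST=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035
        --hostid=2b7d6042-0a58-4103-9990-589a1a785035)
  nvme connect "${HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  nvme connect "${HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G
  ls /sys/class/nvme-subsystem/nvme-subsys0/nvme*/nvme*c*   # the two ANA paths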
00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64658 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:05.123 13:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:05.123 [global] 00:08:05.123 thread=1 00:08:05.123 invalidate=1 00:08:05.123 rw=randrw 00:08:05.123 time_based=1 00:08:05.123 runtime=6 00:08:05.123 ioengine=libaio 00:08:05.123 direct=1 00:08:05.123 bs=4096 00:08:05.123 iodepth=128 00:08:05.123 norandommap=0 00:08:05.123 numjobs=1 00:08:05.123 00:08:05.123 verify_dump=1 00:08:05.123 verify_backlog=512 00:08:05.123 verify_state_save=0 00:08:05.123 do_verify=1 00:08:05.123 verify=crc32c-intel 00:08:05.123 [job0] 00:08:05.123 filename=/dev/nvme0n1 00:08:05.123 Could not set queue depth (nvme0n1) 00:08:05.123 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:05.123 fio-3.35 00:08:05.123 Starting 1 thread 00:08:05.690 13:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:05.949 13:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:06.210 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:06.210 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:06.210 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
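From here on the test repeats one failover pattern while fio runs: flip each listener's ANA state through the target RPC, then poll the host's per-path view until it matches (check_ana_state in the trace does the polling with a 20 s timeout). Condensed, for the first flip:

  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
  cat /sys/block/nvme0c0n1/ana_state    # expect: inaccessible
  cat /sys/block/nvme0c1n1/ana_state    # expect: non-optimized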
00:08:06.210 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:06.210 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:06.210 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:06.210 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:06.210 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:06.210 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:06.210 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:06.210 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:06.210 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:06.210 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:06.468 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:07.035 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:07.035 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:07.035 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:07.035 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:07.035 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:07.035 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:07.035 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:07.035 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:07.035 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:07.035 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:07.035 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:07.035 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:07.035 13:35:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64658 00:08:11.232 00:08:11.232 job0: (groupid=0, jobs=1): err= 0: pid=64679: Tue Oct 1 13:36:02 2024 00:08:11.232 read: IOPS=10.3k, BW=40.4MiB/s (42.4MB/s)(243MiB/6007msec) 00:08:11.232 slat (usec): min=3, max=7454, avg=56.74, stdev=225.21 00:08:11.232 clat (usec): min=1335, max=18406, avg=8453.95, stdev=1469.79 00:08:11.232 lat (usec): min=1348, max=18417, avg=8510.69, stdev=1473.59 00:08:11.232 clat percentiles (usec): 00:08:11.232 | 1.00th=[ 4490], 5.00th=[ 6521], 10.00th=[ 7308], 20.00th=[ 7701], 00:08:11.232 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:08:11.232 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[12125], 00:08:11.232 | 99.00th=[13173], 99.50th=[13435], 99.90th=[15664], 99.95th=[16188], 00:08:11.232 | 99.99th=[18220] 00:08:11.232 bw ( KiB/s): min= 2752, max=27560, per=51.77%, avg=21419.33, stdev=7055.05, samples=12 00:08:11.232 iops : min= 688, max= 6890, avg=5354.83, stdev=1763.76, samples=12 00:08:11.232 write: IOPS=6059, BW=23.7MiB/s (24.8MB/s)(126MiB/5315msec); 0 zone resets 00:08:11.232 slat (usec): min=13, max=8575, avg=66.40, stdev=162.34 00:08:11.232 clat (usec): min=1207, max=15932, avg=7340.55, stdev=1294.74 00:08:11.232 lat (usec): min=1242, max=15956, avg=7406.95, stdev=1299.19 00:08:11.232 clat percentiles (usec): 00:08:11.232 | 1.00th=[ 3392], 5.00th=[ 4424], 10.00th=[ 5735], 20.00th=[ 6849], 00:08:11.232 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7701], 00:08:11.232 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8717], 00:08:11.232 | 99.00th=[11469], 99.50th=[11863], 99.90th=[13173], 99.95th=[13566], 00:08:11.232 | 99.99th=[14615] 00:08:11.232 bw ( KiB/s): min= 2656, max=27032, per=88.40%, avg=21425.33, stdev=6898.92, samples=12 00:08:11.232 iops : min= 664, max= 6758, avg=5356.33, stdev=1724.73, samples=12 00:08:11.232 lat (msec) : 2=0.03%, 4=1.30%, 10=92.82%, 20=5.84% 00:08:11.232 cpu : usr=5.49%, sys=22.34%, ctx=5463, majf=0, minf=139 00:08:11.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:11.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:11.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:11.232 issued rwts: total=62128,32204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:11.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:11.232 00:08:11.232 Run status group 0 (all jobs): 00:08:11.232 READ: bw=40.4MiB/s (42.4MB/s), 40.4MiB/s-40.4MiB/s (42.4MB/s-42.4MB/s), io=243MiB (254MB), run=6007-6007msec 00:08:11.232 WRITE: bw=23.7MiB/s (24.8MB/s), 23.7MiB/s-23.7MiB/s (24.8MB/s-24.8MB/s), io=126MiB (132MB), run=5315-5315msec 00:08:11.232 00:08:11.232 Disk stats (read/write): 00:08:11.232 nvme0n1: ios=61327/31474, merge=0/0, ticks=497000/216678, in_queue=713678, util=98.65% 00:08:11.232 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:11.232 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64760 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:11.799 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:11.799 [global] 00:08:11.799 thread=1 00:08:11.799 invalidate=1 00:08:11.799 rw=randrw 00:08:11.799 time_based=1 00:08:11.799 runtime=6 00:08:11.799 ioengine=libaio 00:08:11.799 direct=1 00:08:11.799 bs=4096 00:08:11.799 iodepth=128 00:08:11.799 norandommap=0 00:08:11.799 numjobs=1 00:08:11.799 00:08:11.799 verify_dump=1 00:08:11.799 verify_backlog=512 00:08:11.799 verify_state_save=0 00:08:11.799 do_verify=1 00:08:11.799 verify=crc32c-intel 00:08:11.799 [job0] 00:08:11.799 filename=/dev/nvme0n1 00:08:11.799 Could not set queue depth (nvme0n1) 00:08:11.799 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:11.799 fio-3.35 00:08:11.799 Starting 1 thread 00:08:12.735 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:12.994 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:13.253 
13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:13.253 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:13.253 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:13.253 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:13.253 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:13.253 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:13.253 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:13.253 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:13.253 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:13.253 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:13.253 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:13.253 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:13.253 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:13.512 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:13.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:13.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:13.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:13.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:13.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:13.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:13.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:13.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:13.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:13.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:13.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:13.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:13.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64760 00:08:17.957 00:08:17.957 job0: (groupid=0, jobs=1): err= 0: pid=64787: Tue Oct 1 13:36:09 2024 00:08:17.957 read: IOPS=11.4k, BW=44.5MiB/s (46.7MB/s)(267MiB/6007msec) 00:08:17.957 slat (usec): min=4, max=6127, avg=42.83, stdev=188.04 00:08:17.957 clat (usec): min=839, max=14930, avg=7670.31, stdev=1963.13 00:08:17.957 lat (usec): min=852, max=14943, avg=7713.14, stdev=1978.71 00:08:17.957 clat percentiles (usec): 00:08:17.957 | 1.00th=[ 2999], 5.00th=[ 4047], 10.00th=[ 4817], 20.00th=[ 5997], 00:08:17.957 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8356], 00:08:17.957 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[10683], 00:08:17.957 | 99.00th=[13173], 99.50th=[13435], 99.90th=[13960], 99.95th=[14222], 00:08:17.957 | 99.99th=[14746] 00:08:17.957 bw ( KiB/s): min= 7744, max=38944, per=52.80%, avg=24056.92, stdev=9101.10, samples=12 00:08:17.957 iops : min= 1936, max= 9736, avg=6014.17, stdev=2275.17, samples=12 00:08:17.957 write: IOPS=6850, BW=26.8MiB/s (28.1MB/s)(141MiB/5280msec); 0 zone resets 00:08:17.957 slat (usec): min=15, max=1895, avg=57.29, stdev=136.25 00:08:17.957 clat (usec): min=1536, max=14647, avg=6572.85, stdev=1763.32 00:08:17.957 lat (usec): min=1561, max=14672, avg=6630.14, stdev=1778.17 00:08:17.957 clat percentiles (usec): 00:08:17.957 | 1.00th=[ 2835], 5.00th=[ 3458], 10.00th=[ 3916], 20.00th=[ 4555], 00:08:17.957 | 30.00th=[ 5538], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 7504], 00:08:17.957 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8225], 95.00th=[ 8455], 00:08:17.957 | 99.00th=[10945], 99.50th=[11731], 99.90th=[13042], 99.95th=[13829], 00:08:17.957 | 99.99th=[14615] 00:08:17.957 bw ( KiB/s): min= 8192, max=38602, per=87.82%, avg=24065.50, stdev=8889.86, samples=12 00:08:17.957 iops : min= 2048, max= 9650, avg=6016.33, stdev=2222.39, samples=12 00:08:17.957 lat (usec) : 1000=0.01% 00:08:17.957 lat (msec) : 2=0.15%, 4=6.82%, 10=88.99%, 20=4.04% 00:08:17.957 cpu : usr=6.04%, sys=24.48%, ctx=5876, majf=0, minf=108 00:08:17.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:17.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:17.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:17.957 issued rwts: total=68427,36171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:17.957 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:08:17.957 00:08:17.957 Run status group 0 (all jobs): 00:08:17.957 READ: bw=44.5MiB/s (46.7MB/s), 44.5MiB/s-44.5MiB/s (46.7MB/s-46.7MB/s), io=267MiB (280MB), run=6007-6007msec 00:08:17.957 WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=141MiB (148MB), run=5280-5280msec 00:08:17.957 00:08:17.957 Disk stats (read/write): 00:08:17.957 nvme0n1: ios=67594/35610, merge=0/0, ticks=494251/217689, in_queue=711940, util=98.62% 00:08:17.957 13:36:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:18.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:18.213 13:36:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:18.213 13:36:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:08:18.213 13:36:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:18.213 13:36:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.213 13:36:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.213 13:36:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:18.213 13:36:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:08:18.213 13:36:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:18.471 rmmod nvme_tcp 00:08:18.471 rmmod nvme_fabrics 00:08:18.471 rmmod nvme_keyring 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 64570 ']' 
00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 64570 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 64570 ']' 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 64570 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64570 00:08:18.471 killing process with pid 64570 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64570' 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 64570 00:08:18.471 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 64570 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:18.729 13:36:10 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:18.729 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:18.988 00:08:18.988 real 0m19.185s 00:08:18.988 user 1m11.016s 00:08:18.988 sys 0m9.711s 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.988 ************************************ 00:08:18.988 END TEST nvmf_target_multipath 00:08:18.988 ************************************ 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.988 ************************************ 00:08:18.988 START TEST nvmf_zcopy 00:08:18.988 ************************************ 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:18.988 * Looking for test storage... 
00:08:18.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:08:18.988 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:19.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.247 --rc genhtml_branch_coverage=1 00:08:19.247 --rc genhtml_function_coverage=1 00:08:19.247 --rc genhtml_legend=1 00:08:19.247 --rc geninfo_all_blocks=1 00:08:19.247 --rc geninfo_unexecuted_blocks=1 00:08:19.247 00:08:19.247 ' 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:19.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.247 --rc genhtml_branch_coverage=1 00:08:19.247 --rc genhtml_function_coverage=1 00:08:19.247 --rc genhtml_legend=1 00:08:19.247 --rc geninfo_all_blocks=1 00:08:19.247 --rc geninfo_unexecuted_blocks=1 00:08:19.247 00:08:19.247 ' 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:19.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.247 --rc genhtml_branch_coverage=1 00:08:19.247 --rc genhtml_function_coverage=1 00:08:19.247 --rc genhtml_legend=1 00:08:19.247 --rc geninfo_all_blocks=1 00:08:19.247 --rc geninfo_unexecuted_blocks=1 00:08:19.247 00:08:19.247 ' 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:19.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.247 --rc genhtml_branch_coverage=1 00:08:19.247 --rc genhtml_function_coverage=1 00:08:19.247 --rc genhtml_legend=1 00:08:19.247 --rc geninfo_all_blocks=1 00:08:19.247 --rc geninfo_unexecuted_blocks=1 00:08:19.247 00:08:19.247 ' 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
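The zcopy preamble traced above decides which lcov option spelling to use by comparing the installed lcov version against 2 via the cmp_versions helper in scripts/common.sh: each version string is split on '.', '-' and ':' and compared component by component. A condensed sketch of that comparison, reduced to the less-than case actually exercised here (the real helper also handles '>', '<=', '>=' and non-numeric components):

    # Simplified "lt A B" in the spirit of scripts/common.sh cmp_versions:
    # split both versions on .-: and compare the fields numerically, left to right.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1
    }

    # lcov 1.15 < 2, so the pre-2.0 option names are exported here:
    lt "$(lcov --version | awk '{print $NF}')" 2 &&
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
        # (the export in the trace above also carries the genhtml/geninfo switches)
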
00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.247 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.248 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:19.248 Cannot find device "nvmf_init_br" 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:19.248 13:36:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:19.248 Cannot find device "nvmf_init_br2" 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:19.248 Cannot find device "nvmf_tgt_br" 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:19.248 Cannot find device "nvmf_tgt_br2" 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:19.248 Cannot find device "nvmf_init_br" 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:19.248 Cannot find device "nvmf_init_br2" 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:19.248 Cannot find device "nvmf_tgt_br" 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:19.248 Cannot find device "nvmf_tgt_br2" 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:19.248 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:19.248 Cannot find device "nvmf_br" 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:19.248 Cannot find device "nvmf_init_if" 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:19.248 Cannot find device "nvmf_init_if2" 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:19.248 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:19.248 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:19.248 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:19.507 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:19.508 13:36:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:19.508 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:19.508 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:08:19.508 00:08:19.508 --- 10.0.0.3 ping statistics --- 00:08:19.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.508 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:19.508 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:19.508 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:08:19.508 00:08:19.508 --- 10.0.0.4 ping statistics --- 00:08:19.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.508 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:19.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:08:19.508 00:08:19.508 --- 10.0.0.1 ping statistics --- 00:08:19.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.508 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:19.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:19.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:19.508 00:08:19.508 --- 10.0.0.2 ping statistics --- 00:08:19.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.508 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=65083 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 65083 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 65083 ']' 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.508 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.766 [2024-10-01 13:36:11.388208] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:08:19.766 [2024-10-01 13:36:11.388314] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.766 [2024-10-01 13:36:11.530883] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.766 [2024-10-01 13:36:11.600262] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.766 [2024-10-01 13:36:11.600315] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.766 [2024-10-01 13:36:11.600329] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.766 [2024-10-01 13:36:11.600340] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.766 [2024-10-01 13:36:11.600349] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.766 [2024-10-01 13:36:11.600378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.024 [2024-10-01 13:36:11.634459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.612 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.612 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:20.612 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:20.612 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:20.612 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.612 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.612 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.613 [2024-10-01 13:36:12.436790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.613 [2024-10-01 13:36:12.452874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.613 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.872 malloc0 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:20.872 { 00:08:20.872 "params": { 00:08:20.872 "name": "Nvme$subsystem", 00:08:20.872 "trtype": "$TEST_TRANSPORT", 00:08:20.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:20.872 "adrfam": "ipv4", 00:08:20.872 "trsvcid": "$NVMF_PORT", 00:08:20.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:20.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:20.872 "hdgst": ${hdgst:-false}, 00:08:20.872 "ddgst": ${ddgst:-false} 00:08:20.872 }, 00:08:20.872 "method": "bdev_nvme_attach_controller" 00:08:20.872 } 00:08:20.872 EOF 00:08:20.872 )") 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
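With the target up, zcopy.sh configures it entirely over the RPC socket and then attaches bdevperf as the initiator using the generated JSON shown above. The test issues these through its rpc_cmd wrapper; the plain rpc.py invocations below are an equivalent sketch with the arguments taken verbatim from the trace, and the config file name is a stand-in for the /dev/fd/62 process substitution used in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport with the zero-copy receive path enabled; flags as traced above.
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

    # Subsystem with a 32 MiB, 4096-byte-block malloc namespace on 10.0.0.3:4420.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # 10-second verify workload at queue depth 128 with 8 KiB I/O, as run above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json bdevperf.json -t 10 -q 128 -w verify -o 8192
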
00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:08:20.872 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:20.872 "params": { 00:08:20.872 "name": "Nvme1", 00:08:20.872 "trtype": "tcp", 00:08:20.872 "traddr": "10.0.0.3", 00:08:20.872 "adrfam": "ipv4", 00:08:20.872 "trsvcid": "4420", 00:08:20.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:20.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:20.872 "hdgst": false, 00:08:20.872 "ddgst": false 00:08:20.872 }, 00:08:20.872 "method": "bdev_nvme_attach_controller" 00:08:20.872 }' 00:08:20.872 [2024-10-01 13:36:12.553811] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:08:20.872 [2024-10-01 13:36:12.553899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65116 ] 00:08:20.872 [2024-10-01 13:36:12.693210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.130 [2024-10-01 13:36:12.765307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.130 [2024-10-01 13:36:12.807442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.130 Running I/O for 10 seconds... 00:08:31.398 5866.00 IOPS, 45.83 MiB/s 5920.00 IOPS, 46.25 MiB/s 5926.67 IOPS, 46.30 MiB/s 5937.50 IOPS, 46.39 MiB/s 5956.40 IOPS, 46.53 MiB/s 5965.33 IOPS, 46.60 MiB/s 5971.29 IOPS, 46.65 MiB/s 5971.88 IOPS, 46.66 MiB/s 5973.44 IOPS, 46.67 MiB/s 5974.40 IOPS, 46.67 MiB/s 00:08:31.398 Latency(us) 00:08:31.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.398 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:31.398 Verification LBA range: start 0x0 length 0x1000 00:08:31.398 Nvme1n1 : 10.01 5974.22 46.67 0.00 0.00 21354.40 413.32 35031.97 00:08:31.398 =================================================================================================================== 00:08:31.398 Total : 5974.22 46.67 0.00 0.00 21354.40 413.32 35031.97 00:08:31.398 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65239 00:08:31.398 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:31.398 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:31.398 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:31.398 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:31.398 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:08:31.398 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:08:31.398 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:31.398 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:31.398 { 00:08:31.398 "params": { 00:08:31.398 "name": "Nvme$subsystem", 00:08:31.398 "trtype": "$TEST_TRANSPORT", 00:08:31.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.398 "adrfam": "ipv4", 00:08:31.398 "trsvcid": "$NVMF_PORT", 00:08:31.398 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.398 "hdgst": ${hdgst:-false}, 00:08:31.398 "ddgst": ${ddgst:-false} 00:08:31.398 }, 00:08:31.398 "method": "bdev_nvme_attach_controller" 00:08:31.398 } 00:08:31.398 EOF 00:08:31.398 )") 00:08:31.398 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:08:31.398 [2024-10-01 13:36:23.091387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.398 [2024-10-01 13:36:23.091586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.398 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:08:31.398 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:08:31.398 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:31.398 "params": { 00:08:31.398 "name": "Nvme1", 00:08:31.398 "trtype": "tcp", 00:08:31.398 "traddr": "10.0.0.3", 00:08:31.398 "adrfam": "ipv4", 00:08:31.398 "trsvcid": "4420", 00:08:31.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.398 "hdgst": false, 00:08:31.398 "ddgst": false 00:08:31.398 }, 00:08:31.398 "method": "bdev_nvme_attach_controller" 00:08:31.398 }' 00:08:31.398 [2024-10-01 13:36:23.103359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.398 [2024-10-01 13:36:23.103513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.398 [2024-10-01 13:36:23.111359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.398 [2024-10-01 13:36:23.111505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.119362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.119509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.127367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.127526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.139378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.139566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.144981] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:08:31.399 [2024-10-01 13:36:23.145260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65239 ] 00:08:31.399 [2024-10-01 13:36:23.147371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.147522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.155372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.155514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.163370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.163512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.171371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.171510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.179372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.179507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.187389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.187546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.195396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.195548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.203382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.203521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.211397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.211599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.219388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.219528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.227390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.227530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.239403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.239563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.247400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.247552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.399 [2024-10-01 13:36:23.255395] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.399 [2024-10-01 13:36:23.255532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.658 [2024-10-01 13:36:23.267405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.658 [2024-10-01 13:36:23.267441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.658 [2024-10-01 13:36:23.275402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.658 [2024-10-01 13:36:23.275436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.658 [2024-10-01 13:36:23.283403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.658 [2024-10-01 13:36:23.283435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.658 [2024-10-01 13:36:23.284744] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.658 [2024-10-01 13:36:23.291429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.658 [2024-10-01 13:36:23.291472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.658 [2024-10-01 13:36:23.299424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.658 [2024-10-01 13:36:23.299461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.658 [2024-10-01 13:36:23.311440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.658 [2024-10-01 13:36:23.311485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.658 [2024-10-01 13:36:23.319445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.658 [2024-10-01 13:36:23.319490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.658 [2024-10-01 13:36:23.327431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.658 [2024-10-01 13:36:23.327468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.658 [2024-10-01 13:36:23.335431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.658 [2024-10-01 13:36:23.335466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.658 [2024-10-01 13:36:23.342900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.658 [2024-10-01 13:36:23.343426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.658 [2024-10-01 13:36:23.343457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.658 [2024-10-01 13:36:23.351424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.658 [2024-10-01 13:36:23.351456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.658 [2024-10-01 13:36:23.359453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.658 [2024-10-01 13:36:23.359497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.658 [2024-10-01 13:36:23.367461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.658 [2024-10-01 13:36:23.367507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:08:31.659 [2024-10-01 13:36:23.375466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.375511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.380774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.659 [2024-10-01 13:36:23.383457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.383493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.391462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.391506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.399462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.399501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.407452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.407489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.415469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.415511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.423473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.423511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.431474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.431511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.439482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.439519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.447498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.447547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.455505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.455557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.463506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.463549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.471517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.471575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.479518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.479562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
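The trace entries above (target/zcopy.sh@37) show the second bdevperf run being started with its target configuration piped in through /dev/fd/63: gen_nvmf_target_json renders the bdev_nvme_attach_controller parameters printed earlier, and bdevperf consumes them while the namespace add/remove RPCs keep failing in the background. A minimal sketch of that invocation, assuming only the helper name, binary path, and flags visible in this log (not a verbatim excerpt of zcopy.sh), would look like:

    # Sketch, reconstructed from the trace above; gen_nvmf_target_json and the
    # bdevperf path are taken as-is from the log, everything else is assumed.
    # Process substitution exposes the rendered JSON (the bdev_nvme_attach_controller
    # block printed earlier) to bdevperf as /dev/fd/63.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) \
        -t 5 -q 128 -w randrw -M 50 -o 8192

The flags match the logged command line: a 5-second random read/write workload at queue depth 128 with 8192-byte I/O and what appears to be a 50% read mix, against the Nvme1 controller attached from the generated config.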
00:08:31.659 Running I/O for 5 seconds... 00:08:31.659 [2024-10-01 13:36:23.487520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.487562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.501655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.501696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.659 [2024-10-01 13:36:23.516306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.659 [2024-10-01 13:36:23.516352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.918 [2024-10-01 13:36:23.532415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.918 [2024-10-01 13:36:23.532462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.918 [2024-10-01 13:36:23.542327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.918 [2024-10-01 13:36:23.542369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.918 [2024-10-01 13:36:23.557527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.918 [2024-10-01 13:36:23.557584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.918 [2024-10-01 13:36:23.567268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.918 [2024-10-01 13:36:23.567308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.918 [2024-10-01 13:36:23.583160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.918 [2024-10-01 13:36:23.583203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.918 [2024-10-01 13:36:23.600425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.918 [2024-10-01 13:36:23.600471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.918 [2024-10-01 13:36:23.610108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.918 [2024-10-01 13:36:23.610149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.918 [2024-10-01 13:36:23.621719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.918 [2024-10-01 13:36:23.621763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.918 [2024-10-01 13:36:23.632551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.918 [2024-10-01 13:36:23.632592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.918 [2024-10-01 13:36:23.648685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.918 [2024-10-01 13:36:23.648727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.918 [2024-10-01 13:36:23.665314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.918 [2024-10-01 13:36:23.665355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.918 [2024-10-01 13:36:23.675358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:08:31.919 [2024-10-01 13:36:23.675398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.919 [2024-10-01 13:36:23.690410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.919 [2024-10-01 13:36:23.690453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.919 [2024-10-01 13:36:23.701342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.919 [2024-10-01 13:36:23.701384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.919 [2024-10-01 13:36:23.716122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.919 [2024-10-01 13:36:23.716306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.919 [2024-10-01 13:36:23.732176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.919 [2024-10-01 13:36:23.732225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.919 [2024-10-01 13:36:23.741408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.919 [2024-10-01 13:36:23.741455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.919 [2024-10-01 13:36:23.754703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.919 [2024-10-01 13:36:23.754743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.919 [2024-10-01 13:36:23.765573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.919 [2024-10-01 13:36:23.765612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.780216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.780256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.797612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.797657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.807575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.807618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.822023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.822070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.832310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.832356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.847146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.847193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.864350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.864532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 
[2024-10-01 13:36:23.880693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.880732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.890374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.890422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.905652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.905692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.923215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.923266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.940208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.940383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.955610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.955777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.965643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.965795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.977146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.977315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:23.987677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:23.987834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:24.002562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:24.002732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:24.012989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:24.013148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.178 [2024-10-01 13:36:24.028054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.178 [2024-10-01 13:36:24.028244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.438 [2024-10-01 13:36:24.044514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.438 [2024-10-01 13:36:24.044711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.438 [2024-10-01 13:36:24.054653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.438 [2024-10-01 13:36:24.054808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.438 [2024-10-01 13:36:24.069608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.438 [2024-10-01 
13:36:24.069770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.438 [2024-10-01 13:36:24.080265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.438 [2024-10-01 13:36:24.080416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.438 [2024-10-01 13:36:24.095366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.438 [2024-10-01 13:36:24.095561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.438 [2024-10-01 13:36:24.111903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.438 [2024-10-01 13:36:24.112083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.438 [2024-10-01 13:36:24.121765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.438 [2024-10-01 13:36:24.121920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.438 [2024-10-01 13:36:24.136182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.438 [2024-10-01 13:36:24.136343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.438 [2024-10-01 13:36:24.145989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.438 [2024-10-01 13:36:24.146139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.438 [2024-10-01 13:36:24.161282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.438 [2024-10-01 13:36:24.161449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.438 [2024-10-01 13:36:24.177770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.438 [2024-10-01 13:36:24.177942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.438 [2024-10-01 13:36:24.195567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.438 [2024-10-01 13:36:24.195740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.438 [2024-10-01 13:36:24.206615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.439 [2024-10-01 13:36:24.206766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.439 [2024-10-01 13:36:24.224963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.439 [2024-10-01 13:36:24.225136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.439 [2024-10-01 13:36:24.235721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.439 [2024-10-01 13:36:24.235882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.439 [2024-10-01 13:36:24.246981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.439 [2024-10-01 13:36:24.247148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.439 [2024-10-01 13:36:24.258157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.439 [2024-10-01 13:36:24.258308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.439 [2024-10-01 13:36:24.273034] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.439 [2024-10-01 13:36:24.273209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.439 [2024-10-01 13:36:24.283914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.439 [2024-10-01 13:36:24.284073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.439 [2024-10-01 13:36:24.298888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.698 [2024-10-01 13:36:24.299070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.698 [2024-10-01 13:36:24.315677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.698 [2024-10-01 13:36:24.315721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.698 [2024-10-01 13:36:24.333124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.698 [2024-10-01 13:36:24.333168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.698 [2024-10-01 13:36:24.347821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.698 [2024-10-01 13:36:24.347863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.365242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.365287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.375881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.375922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.386769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.386809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.399601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.399641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.417074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.417116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.433687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.433728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.443457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.443500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.454959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.455012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.465972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.466012] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.484253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.484295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 11443.00 IOPS, 89.40 MiB/s [2024-10-01 13:36:24.499385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.499563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.509656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.509808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.521837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.521992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.533021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.533189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.548466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.548659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.699 [2024-10-01 13:36:24.558787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.699 [2024-10-01 13:36:24.558952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.574181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.574358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.584188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.584342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.599017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.599057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.614931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.614975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.624215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.624254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.641199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.641242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.657518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.657576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.673945] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.673989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.684087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.684133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.698819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.698861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.709204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.709364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.724337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.724513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.742953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.743122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.959 [2024-10-01 13:36:24.758137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.959 [2024-10-01 13:36:24.758299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.960 [2024-10-01 13:36:24.767971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.960 [2024-10-01 13:36:24.768121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.960 [2024-10-01 13:36:24.784120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.960 [2024-10-01 13:36:24.784275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.960 [2024-10-01 13:36:24.794550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.960 [2024-10-01 13:36:24.794716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.960 [2024-10-01 13:36:24.805641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.960 [2024-10-01 13:36:24.805789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.219 [2024-10-01 13:36:24.822756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.219 [2024-10-01 13:36:24.822914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.219 [2024-10-01 13:36:24.841391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.219 [2024-10-01 13:36:24.841583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.219 [2024-10-01 13:36:24.856745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:24.856916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:24.866820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:24.866977] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:24.878443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:24.878611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:24.889411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:24.889581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:24.906679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:24.906839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:24.924299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:24.924462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:24.934615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:24.934773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:24.946628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:24.946785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:24.962009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:24.962163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:24.978417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:24.978581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:24.988249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:24.988395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:25.004376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:25.004526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:25.021072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:25.021240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:25.031573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:25.031732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:25.046389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:25.046567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:25.056495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:25.056662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:25.068603] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:25.068751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.220 [2024-10-01 13:36:25.079780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.220 [2024-10-01 13:36:25.079928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.091155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.091328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.106870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.107033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.117801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.117949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.132775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.132810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.149953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.149994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.160475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.160671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.172843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.172883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.183927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.184086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.200076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.200250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.216414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.216631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.226232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.226464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.238890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.239103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.254631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.254945] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.271066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.271365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.288274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.288595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.298460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.298769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.313643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.313916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.323882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.324135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.480 [2024-10-01 13:36:25.338103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.480 [2024-10-01 13:36:25.338338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.348754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.349039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.363772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.363934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.381652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.381981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.392358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.392391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.408087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.408133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.423929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.424028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.440464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.440499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.450604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.450681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.466265] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.466299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.476831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.476866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 11369.00 IOPS, 88.82 MiB/s [2024-10-01 13:36:25.492192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.492428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.508703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.508741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.518540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.518648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.531035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.531089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.543104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.543137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.555028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.555094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.572086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.572124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.588875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.588914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.740 [2024-10-01 13:36:25.599158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.740 [2024-10-01 13:36:25.599195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.611177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.611213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.622233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.622412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.638886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.638924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.649024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 
13:36:25.649234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.663435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.663470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.681048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.681101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.691745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.691790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.707245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.707440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.718333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.718521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.730395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.730433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.742327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.742389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.758218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.758252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.768209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.768259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.780673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.780727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.796419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.796664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.812784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.812821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.822857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.822894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.835266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.835299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.000 [2024-10-01 13:36:25.846369] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.000 [2024-10-01 13:36:25.846406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:25.864354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:25.864388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:25.880671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:25.880705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:25.891373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:25.891411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:25.906459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:25.906632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:25.917496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:25.917558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:25.928817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:25.928855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:25.940511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:25.940591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:25.952557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:25.952649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:25.969469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:25.969504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:25.985670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:25.985737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:25.996120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:25.996153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:26.011534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:26.011593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:26.021587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:26.021790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:26.034026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:26.034190] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:26.045249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:26.045432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:26.062123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:26.062308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:26.078019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:26.078253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:26.088009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:26.088199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:26.100137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:26.100322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.260 [2024-10-01 13:36:26.111384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.260 [2024-10-01 13:36:26.111590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.519 [2024-10-01 13:36:26.123330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.519 [2024-10-01 13:36:26.123481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.519 [2024-10-01 13:36:26.134565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.519 [2024-10-01 13:36:26.134716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.519 [2024-10-01 13:36:26.150629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.519 [2024-10-01 13:36:26.150780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.519 [2024-10-01 13:36:26.168116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.519 [2024-10-01 13:36:26.168266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.519 [2024-10-01 13:36:26.184361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.184512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.194182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.194330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.206343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.206491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.217740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.217886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.233872] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.234020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.250832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.250991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.260749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.260898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.275609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.275759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.291682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.291830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.301017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.301164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.313266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.313414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.329374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.329581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.346958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.347154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.357398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.357432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.520 [2024-10-01 13:36:26.371854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.520 [2024-10-01 13:36:26.371892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.779 [2024-10-01 13:36:26.385153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.779 [2024-10-01 13:36:26.385186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.394826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.394862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.406572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.406619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.421059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.421100] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.430714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.430749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.445611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.445664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.456109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.456144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.470343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.470517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.486228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 11280.00 IOPS, 88.12 MiB/s [2024-10-01 13:36:26.486405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.496454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.496680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.508678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.508869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.520312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.520487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.530801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.530982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.545456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.545648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.562038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.562205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.578930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.579227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.588338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.588519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.600076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.600254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.611279] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.611325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.621953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.621987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.780 [2024-10-01 13:36:26.636289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.780 [2024-10-01 13:36:26.636326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.646516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.646592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.661527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.661805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.672001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.672055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.687044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.687099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.702442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.702482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.712390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.712428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.724773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.724812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.735701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.735739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.752687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.752725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.771316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.771512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.782756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.782794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.799037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.799106] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.809106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.809158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.821343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.821379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.832841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.832883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.850328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.850503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.866483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.866523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.876419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.876622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.039 [2024-10-01 13:36:26.888680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.039 [2024-10-01 13:36:26.888717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:26.899726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:26.899763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:26.913033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:26.913098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:26.929815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:26.929854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:26.946152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:26.946189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:26.956058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:26.956094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:26.968660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:26.968697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:26.983725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:26.983762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:26.999922] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:26.999959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:27.009799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:27.009834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:27.024304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:27.024339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:27.034254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:27.034289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:27.048824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:27.048861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:27.059374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:27.059589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:27.070391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:27.070426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:27.088030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:27.088065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:27.097438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:27.097473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:27.110960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:27.111151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:27.121704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:27.121739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:27.133580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:27.133790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:27.143561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:27.143635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.299 [2024-10-01 13:36:27.158118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.299 [2024-10-01 13:36:27.158153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.174666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.174716] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.184568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.184757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.199303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.199486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.215979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.216156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.225621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.225793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.241616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.241791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.252378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.252591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.267234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.267410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.284878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.285077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.295229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.295447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.306204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.306386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.319338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.319526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.337792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.337974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.353933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.354146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.371176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.371368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.381219] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.381401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.392214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.392415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.558 [2024-10-01 13:36:27.405506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.558 [2024-10-01 13:36:27.405725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.421819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.422008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.439460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.439657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.449652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.449799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.464674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.464850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.481716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.481919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 11310.75 IOPS, 88.37 MiB/s [2024-10-01 13:36:27.498658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.498808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.515017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.515199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.525375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.525524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.541011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.541172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.556109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.556279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.566007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.566237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.578435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 
13:36:27.578620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.588846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.589047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.599406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.599612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.610348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.610531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.626793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.626994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.642832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.643004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.660135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.660302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.818 [2024-10-01 13:36:27.670434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.818 [2024-10-01 13:36:27.670623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.682519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.682735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.698030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.698236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.714301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.714478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.732244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.732414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.742923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.743084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.757378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.757577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.767874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.768074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.782928] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.783103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.798950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.799124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.808091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.808266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.821476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.821696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.836307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.836508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.853499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.853703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.867683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.867843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.883346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.883516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.893531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.893715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.905575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.905609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.916180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.916216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.926707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.078 [2024-10-01 13:36:27.926768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.078 [2024-10-01 13:36:27.937882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:27.938031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:27.950821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:27.950858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:27.969025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:27.969216] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:27.984504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:27.984676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:27.994469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:27.994508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.006196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.006232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.017209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.017245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.031972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.032165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.041825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.041861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.053215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.053402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.069227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.069263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.086679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.086748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.097316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.097353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.112135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.112312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.128143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.128180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.137425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.137460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.151219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.151382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.165779] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.165963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.181527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.181747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.337 [2024-10-01 13:36:28.191449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.337 [2024-10-01 13:36:28.191643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.206807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.206982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.217194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.217357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.232190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.232350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.242995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.243144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.258342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.258545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.274781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.274956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.284504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.284721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.299221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.299390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.315501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.315715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.332680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.332881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.343224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.343394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.357695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.357936] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.367192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.367360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.378698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.378872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.395127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.395385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.412056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.412265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.427922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.428128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.596 [2024-10-01 13:36:28.445423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.596 [2024-10-01 13:36:28.445620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.462035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.462218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.471936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.472137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 11343.20 IOPS, 88.62 MiB/s [2024-10-01 13:36:28.486548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.486714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 00:08:36.856 Latency(us) 00:08:36.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.856 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:36.856 Nvme1n1 : 5.01 11345.41 88.64 0.00 0.00 11269.28 3872.58 23235.49 00:08:36.856 =================================================================================================================== 00:08:36.856 Total : 11345.41 88.64 0.00 0.00 11269.28 3872.58 23235.49 00:08:36.856 [2024-10-01 13:36:28.498469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.498659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.506472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.506658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.514478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.514690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 
13:36:28.526510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.526828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.534498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.534595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.542499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.542555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.554528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.554580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.566532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.566609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.574508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.574554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.582494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.582528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.590488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.590518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.598495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.598527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.606537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.606616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.614504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.614564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.622493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.622522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.630503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.630728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.638517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.638749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.646519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.646575] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.654501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.654530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 [2024-10-01 13:36:28.662500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.856 [2024-10-01 13:36:28.662528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.856 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65239) - No such process 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65239 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.856 delay0 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.856 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:08:37.115 [2024-10-01 13:36:28.861970] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:43.775 Initializing NVMe Controllers 00:08:43.775 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:43.775 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:43.775 Initialization complete. Launching workers. 
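For reference, the delay0/abort sequence traced just above reduces to the following steps; the per-worker completion counts that follow come from this run. This is a minimal sketch rather than the zcopy.sh source: it stands in scripts/rpc.py for the test framework's rpc_cmd wrapper, uses paths relative to the SPDK repo, and assumes the target is still listening on 10.0.0.3:4420 as set up earlier in the run.

    #!/usr/bin/env bash
    # Replace namespace 1 on cnode1 with a delay bdev layered on malloc0 so that
    # I/O stays in flight long enough for the abort example to cancel it.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # 5-second randrw abort run at queue depth 64 against the TCP listener.
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'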
00:08:43.775 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 91 00:08:43.775 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 378, failed to submit 33 00:08:43.775 success 256, unsuccessful 122, failed 0 00:08:43.775 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:43.775 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:43.775 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:43.775 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:43.775 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.775 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:43.775 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.775 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.775 rmmod nvme_tcp 00:08:43.775 rmmod nvme_fabrics 00:08:43.775 rmmod nvme_keyring 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 65083 ']' 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 65083 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 65083 ']' 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 65083 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65083 00:08:43.775 killing process with pid 65083 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65083' 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 65083 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 65083 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:08:43.775 13:36:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:08:43.775 00:08:43.775 real 0m24.759s 00:08:43.775 user 0m40.471s 00:08:43.775 sys 0m6.413s 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.775 ************************************ 00:08:43.775 END TEST nvmf_zcopy 00:08:43.775 ************************************ 00:08:43.775 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.776 13:36:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:43.776 13:36:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:43.776 13:36:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.776 13:36:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.776 ************************************ 00:08:43.776 START TEST nvmf_nmic 00:08:43.776 ************************************ 00:08:43.776 13:36:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:43.776 * Looking for test storage... 00:08:43.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:43.776 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:43.776 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:08:43.776 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:44.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.035 --rc genhtml_branch_coverage=1 00:08:44.035 --rc genhtml_function_coverage=1 00:08:44.035 --rc genhtml_legend=1 00:08:44.035 --rc geninfo_all_blocks=1 00:08:44.035 --rc geninfo_unexecuted_blocks=1 00:08:44.035 00:08:44.035 ' 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:44.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.035 --rc genhtml_branch_coverage=1 00:08:44.035 --rc genhtml_function_coverage=1 00:08:44.035 --rc genhtml_legend=1 00:08:44.035 --rc geninfo_all_blocks=1 00:08:44.035 --rc geninfo_unexecuted_blocks=1 00:08:44.035 00:08:44.035 ' 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:44.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.035 --rc genhtml_branch_coverage=1 00:08:44.035 --rc genhtml_function_coverage=1 00:08:44.035 --rc genhtml_legend=1 00:08:44.035 --rc geninfo_all_blocks=1 00:08:44.035 --rc geninfo_unexecuted_blocks=1 00:08:44.035 00:08:44.035 ' 00:08:44.035 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:44.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.035 --rc genhtml_branch_coverage=1 00:08:44.035 --rc genhtml_function_coverage=1 00:08:44.035 --rc genhtml_legend=1 00:08:44.035 --rc geninfo_all_blocks=1 00:08:44.035 --rc geninfo_unexecuted_blocks=1 00:08:44.035 00:08:44.035 ' 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.036 13:36:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.036 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:44.036 13:36:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:44.036 Cannot 
find device "nvmf_init_br" 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:44.036 Cannot find device "nvmf_init_br2" 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:44.036 Cannot find device "nvmf_tgt_br" 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.036 Cannot find device "nvmf_tgt_br2" 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:44.036 Cannot find device "nvmf_init_br" 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:44.036 Cannot find device "nvmf_init_br2" 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:44.036 Cannot find device "nvmf_tgt_br" 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:44.036 Cannot find device "nvmf_tgt_br2" 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:44.036 Cannot find device "nvmf_br" 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:44.036 Cannot find device "nvmf_init_if" 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:44.036 Cannot find device "nvmf_init_if2" 00:08:44.036 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:08:44.037 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:44.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.037 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:08:44.037 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.037 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:08:44.037 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:44.037 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:08:44.037 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:44.037 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:44.037 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:44.295 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:44.295 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:44.295 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:44.295 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:44.295 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:44.295 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:44.295 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:44.295 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:44.295 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:44.295 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:44.295 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:44.295 13:36:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:44.295 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:44.295 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:08:44.295 00:08:44.295 --- 10.0.0.3 ping statistics --- 00:08:44.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.295 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:44.295 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:44.295 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:08:44.295 00:08:44.295 --- 10.0.0.4 ping statistics --- 00:08:44.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.295 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:44.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:44.295 00:08:44.295 --- 10.0.0.1 ping statistics --- 00:08:44.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.295 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:44.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:44.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:08:44.295 00:08:44.295 --- 10.0.0.2 ping statistics --- 00:08:44.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.295 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:08:44.295 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=65613 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 65613 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 65613 ']' 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.296 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.553 [2024-10-01 13:36:36.205881] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:08:44.554 [2024-10-01 13:36:36.205990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.554 [2024-10-01 13:36:36.350200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.812 [2024-10-01 13:36:36.424106] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.812 [2024-10-01 13:36:36.424169] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.812 [2024-10-01 13:36:36.424183] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.812 [2024-10-01 13:36:36.424193] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.812 [2024-10-01 13:36:36.424202] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.812 [2024-10-01 13:36:36.424303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.812 [2024-10-01 13:36:36.424344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.812 [2024-10-01 13:36:36.424496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.812 [2024-10-01 13:36:36.425074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.812 [2024-10-01 13:36:36.462779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.812 [2024-10-01 13:36:36.563813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.812 Malloc0 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:44.812 13:36:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.812 [2024-10-01 13:36:36.612406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:44.812 test case1: single bdev can't be used in multiple subsystems 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.812 [2024-10-01 13:36:36.644249] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:44.812 [2024-10-01 13:36:36.644290] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:44.812 [2024-10-01 13:36:36.644303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.812 request: 00:08:44.812 { 00:08:44.812 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:44.812 "namespace": { 00:08:44.812 "bdev_name": "Malloc0", 00:08:44.812 "no_auto_visible": false 00:08:44.812 }, 00:08:44.812 "method": "nvmf_subsystem_add_ns", 00:08:44.812 "req_id": 1 00:08:44.812 } 00:08:44.812 Got JSON-RPC error response 00:08:44.812 response: 00:08:44.812 { 00:08:44.812 "code": -32602, 00:08:44.812 "message": "Invalid parameters" 00:08:44.812 } 00:08:44.812 Adding namespace failed - expected result. 00:08:44.812 test case2: host connect to nvmf target in multiple paths 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.812 [2024-10-01 13:36:36.656355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.812 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid=2b7d6042-0a58-4103-9990-589a1a785035 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:45.070 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid=2b7d6042-0a58-4103-9990-589a1a785035 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:08:45.070 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:45.070 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:08:45.070 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:45.070 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:45.070 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:08:47.596 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:47.596 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:47.596 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.596 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:47.596 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.596 13:36:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:08:47.596 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:47.596 [global] 00:08:47.596 thread=1 00:08:47.596 invalidate=1 00:08:47.596 rw=write 00:08:47.596 time_based=1 00:08:47.596 runtime=1 00:08:47.596 ioengine=libaio 00:08:47.596 direct=1 00:08:47.596 bs=4096 00:08:47.596 iodepth=1 00:08:47.596 norandommap=0 00:08:47.596 numjobs=1 00:08:47.596 00:08:47.596 verify_dump=1 00:08:47.596 verify_backlog=512 00:08:47.596 verify_state_save=0 00:08:47.596 do_verify=1 00:08:47.596 verify=crc32c-intel 00:08:47.596 [job0] 00:08:47.596 filename=/dev/nvme0n1 00:08:47.596 Could not set queue depth (nvme0n1) 00:08:47.596 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:47.596 fio-3.35 00:08:47.596 Starting 1 thread 00:08:48.580 00:08:48.580 job0: (groupid=0, jobs=1): err= 0: pid=65692: Tue Oct 1 13:36:40 2024 00:08:48.580 read: IOPS=2783, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:08:48.580 slat (nsec): min=13756, max=50822, avg=17587.50, stdev=5023.60 00:08:48.580 clat (usec): min=143, max=286, avg=183.77, stdev=18.68 00:08:48.580 lat (usec): min=158, max=314, avg=201.35, stdev=20.14 00:08:48.580 clat percentiles (usec): 00:08:48.580 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:08:48.580 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:08:48.580 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 219], 00:08:48.580 | 99.00th=[ 243], 99.50th=[ 258], 99.90th=[ 281], 99.95th=[ 285], 00:08:48.580 | 99.99th=[ 289] 00:08:48.580 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:48.580 slat (usec): min=17, max=112, avg=24.69, stdev= 6.23 00:08:48.580 clat (usec): min=88, max=586, avg=114.14, stdev=19.12 00:08:48.580 lat (usec): min=109, max=610, avg=138.83, stdev=21.47 00:08:48.580 clat percentiles (usec): 00:08:48.580 | 1.00th=[ 92], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 101], 00:08:48.580 | 30.00th=[ 104], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 114], 00:08:48.580 | 70.00th=[ 118], 80.00th=[ 125], 90.00th=[ 137], 95.00th=[ 149], 00:08:48.580 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 253], 99.95th=[ 277], 00:08:48.580 | 99.99th=[ 586] 00:08:48.580 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:08:48.580 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:48.580 lat (usec) : 100=8.76%, 250=90.87%, 500=0.36%, 750=0.02% 00:08:48.580 cpu : usr=2.50%, sys=10.20%, ctx=5858, majf=0, minf=5 00:08:48.580 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:48.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.580 issued rwts: total=2786,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.580 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:48.580 00:08:48.580 Run status group 0 (all jobs): 00:08:48.580 READ: bw=10.9MiB/s (11.4MB/s), 10.9MiB/s-10.9MiB/s (11.4MB/s-11.4MB/s), io=10.9MiB (11.4MB), run=1001-1001msec 00:08:48.580 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:08:48.580 00:08:48.580 Disk stats (read/write): 00:08:48.580 nvme0n1: ios=2610/2664, merge=0/0, ticks=487/337, 
in_queue=824, util=91.28% 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:48.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.580 rmmod nvme_tcp 00:08:48.580 rmmod nvme_fabrics 00:08:48.580 rmmod nvme_keyring 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 65613 ']' 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 65613 00:08:48.580 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 65613 ']' 00:08:48.581 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 65613 00:08:48.581 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:08:48.581 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.581 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65613 00:08:48.839 killing process with pid 65613 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65613' 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # 
kill 65613 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 65613 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:48.839 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:08:49.098 00:08:49.098 real 0m5.411s 00:08:49.098 user 0m15.672s 00:08:49.098 sys 0m2.256s 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.098 ************************************ 
00:08:49.098 END TEST nvmf_nmic 00:08:49.098 ************************************ 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.098 13:36:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.357 ************************************ 00:08:49.357 START TEST nvmf_fio_target 00:08:49.357 ************************************ 00:08:49.357 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:49.357 * Looking for test storage... 00:08:49.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:49.357 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:49.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.358 --rc genhtml_branch_coverage=1 00:08:49.358 --rc genhtml_function_coverage=1 00:08:49.358 --rc genhtml_legend=1 00:08:49.358 --rc geninfo_all_blocks=1 00:08:49.358 --rc geninfo_unexecuted_blocks=1 00:08:49.358 00:08:49.358 ' 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:49.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.358 --rc genhtml_branch_coverage=1 00:08:49.358 --rc genhtml_function_coverage=1 00:08:49.358 --rc genhtml_legend=1 00:08:49.358 --rc geninfo_all_blocks=1 00:08:49.358 --rc geninfo_unexecuted_blocks=1 00:08:49.358 00:08:49.358 ' 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:49.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.358 --rc genhtml_branch_coverage=1 00:08:49.358 --rc genhtml_function_coverage=1 00:08:49.358 --rc genhtml_legend=1 00:08:49.358 --rc geninfo_all_blocks=1 00:08:49.358 --rc geninfo_unexecuted_blocks=1 00:08:49.358 00:08:49.358 ' 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:49.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.358 --rc genhtml_branch_coverage=1 00:08:49.358 --rc genhtml_function_coverage=1 00:08:49.358 --rc genhtml_legend=1 00:08:49.358 --rc geninfo_all_blocks=1 00:08:49.358 --rc geninfo_unexecuted_blocks=1 00:08:49.358 00:08:49.358 ' 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:49.358 
13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.358 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:49.358 13:36:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:49.358 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:49.359 Cannot find device "nvmf_init_br" 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:49.359 Cannot find device "nvmf_init_br2" 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:08:49.359 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:49.617 Cannot find device "nvmf_tgt_br" 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.618 Cannot find device "nvmf_tgt_br2" 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:49.618 Cannot find device "nvmf_init_br" 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:49.618 Cannot find device "nvmf_init_br2" 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:49.618 Cannot find device "nvmf_tgt_br" 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:49.618 Cannot find device "nvmf_tgt_br2" 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:49.618 Cannot find device "nvmf_br" 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:49.618 Cannot find device "nvmf_init_if" 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:49.618 Cannot find device "nvmf_init_if2" 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:08:49.618 
13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:49.618 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:49.876 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:49.876 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:08:49.876 00:08:49.876 --- 10.0.0.3 ping statistics --- 00:08:49.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.876 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:49.876 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:49.876 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:08:49.876 00:08:49.876 --- 10.0.0.4 ping statistics --- 00:08:49.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.876 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:49.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:49.876 00:08:49.876 --- 10.0.0.1 ping statistics --- 00:08:49.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.876 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:49.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:49.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:08:49.876 00:08:49.876 --- 10.0.0.2 ping statistics --- 00:08:49.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.876 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=65928 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 65928 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 65928 ']' 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.876 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.876 [2024-10-01 13:36:41.671212] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:08:49.876 [2024-10-01 13:36:41.671296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.135 [2024-10-01 13:36:41.810197] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.135 [2024-10-01 13:36:41.880253] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.135 [2024-10-01 13:36:41.880312] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.135 [2024-10-01 13:36:41.880326] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.135 [2024-10-01 13:36:41.880336] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.135 [2024-10-01 13:36:41.880345] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.135 [2024-10-01 13:36:41.880419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.135 [2024-10-01 13:36:41.880561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.135 [2024-10-01 13:36:41.881250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.135 [2024-10-01 13:36:41.881292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.135 [2024-10-01 13:36:41.914485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.135 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.135 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:08:50.135 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:50.135 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:50.135 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:50.392 13:36:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.392 13:36:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:50.649 [2024-10-01 13:36:42.304126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.649 13:36:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.907 13:36:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:50.907 13:36:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.165 13:36:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:51.165 13:36:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.424 13:36:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:51.424 13:36:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.683 13:36:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:51.683 13:36:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:51.941 13:36:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.199 13:36:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:52.199 13:36:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.767 13:36:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:52.767 13:36:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.026 13:36:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:53.026 13:36:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:53.284 13:36:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:53.542 13:36:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:53.542 13:36:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:53.801 13:36:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:53.801 13:36:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:54.059 13:36:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:54.317 [2024-10-01 13:36:46.052446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:54.317 13:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:54.884 13:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:54.884 13:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid=2b7d6042-0a58-4103-9990-589a1a785035 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:55.142 13:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:55.142 13:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:08:55.142 13:36:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:55.142 13:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:08:55.142 13:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:08:55.142 13:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:08:57.047 13:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:57.047 13:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:57.047 13:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:57.047 13:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:08:57.047 13:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:57.047 13:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:08:57.047 13:36:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:57.047 [global] 00:08:57.047 thread=1 00:08:57.047 invalidate=1 00:08:57.047 rw=write 00:08:57.047 time_based=1 00:08:57.047 runtime=1 00:08:57.047 ioengine=libaio 00:08:57.047 direct=1 00:08:57.047 bs=4096 00:08:57.047 iodepth=1 00:08:57.047 norandommap=0 00:08:57.047 numjobs=1 00:08:57.047 00:08:57.047 verify_dump=1 00:08:57.047 verify_backlog=512 00:08:57.047 verify_state_save=0 00:08:57.047 do_verify=1 00:08:57.047 verify=crc32c-intel 00:08:57.047 [job0] 00:08:57.047 filename=/dev/nvme0n1 00:08:57.047 [job1] 00:08:57.047 filename=/dev/nvme0n2 00:08:57.047 [job2] 00:08:57.047 filename=/dev/nvme0n3 00:08:57.047 [job3] 00:08:57.047 filename=/dev/nvme0n4 00:08:57.307 Could not set queue depth (nvme0n1) 00:08:57.307 Could not set queue depth (nvme0n2) 00:08:57.307 Could not set queue depth (nvme0n3) 00:08:57.307 Could not set queue depth (nvme0n4) 00:08:57.307 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.307 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.307 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.307 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.307 fio-3.35 00:08:57.307 Starting 4 threads 00:08:58.683 00:08:58.684 job0: (groupid=0, jobs=1): err= 0: pid=66110: Tue Oct 1 13:36:50 2024 00:08:58.684 read: IOPS=1843, BW=7373KiB/s (7550kB/s)(7380KiB/1001msec) 00:08:58.684 slat (usec): min=14, max=810, avg=18.26, stdev=18.96 00:08:58.684 clat (usec): min=4, max=7425, avg=284.46, stdev=217.79 00:08:58.684 lat (usec): min=170, max=7447, avg=302.72, stdev=218.77 00:08:58.684 clat percentiles (usec): 00:08:58.684 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:08:58.684 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:08:58.684 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 355], 95.00th=[ 412], 00:08:58.684 | 99.00th=[ 461], 99.50th=[ 469], 99.90th=[ 5800], 99.95th=[ 7439], 00:08:58.684 | 99.99th=[ 7439] 
00:08:58.684 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:58.684 slat (usec): min=20, max=195, avg=26.45, stdev= 9.34 00:08:58.684 clat (usec): min=104, max=2469, avg=184.89, stdev=80.09 00:08:58.684 lat (usec): min=130, max=2498, avg=211.34, stdev=84.96 00:08:58.684 clat percentiles (usec): 00:08:58.684 | 1.00th=[ 112], 5.00th=[ 119], 10.00th=[ 123], 20.00th=[ 131], 00:08:58.684 | 30.00th=[ 143], 40.00th=[ 165], 50.00th=[ 176], 60.00th=[ 184], 00:08:58.684 | 70.00th=[ 194], 80.00th=[ 210], 90.00th=[ 289], 95.00th=[ 326], 00:08:58.684 | 99.00th=[ 371], 99.50th=[ 412], 99.90th=[ 586], 99.95th=[ 635], 00:08:58.684 | 99.99th=[ 2474] 00:08:58.684 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:08:58.684 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:58.684 lat (usec) : 10=0.03%, 250=62.42%, 500=37.30%, 750=0.18% 00:08:58.684 lat (msec) : 4=0.03%, 10=0.05% 00:08:58.684 cpu : usr=2.10%, sys=6.70%, ctx=3893, majf=0, minf=11 00:08:58.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.684 issued rwts: total=1845,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.684 job1: (groupid=0, jobs=1): err= 0: pid=66111: Tue Oct 1 13:36:50 2024 00:08:58.684 read: IOPS=1913, BW=7652KiB/s (7836kB/s)(7652KiB/1000msec) 00:08:58.684 slat (nsec): min=13340, max=69530, avg=18700.55, stdev=6782.34 00:08:58.684 clat (usec): min=151, max=1502, avg=290.26, stdev=85.67 00:08:58.684 lat (usec): min=165, max=1516, avg=308.96, stdev=89.59 00:08:58.684 clat percentiles (usec): 00:08:58.684 | 1.00th=[ 188], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 241], 00:08:58.684 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:08:58.684 | 70.00th=[ 277], 80.00th=[ 310], 90.00th=[ 457], 95.00th=[ 474], 00:08:58.684 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 611], 99.95th=[ 1500], 00:08:58.684 | 99.99th=[ 1500] 00:08:58.684 write: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec); 0 zone resets 00:08:58.684 slat (usec): min=16, max=207, avg=24.86, stdev= 9.50 00:08:58.684 clat (usec): min=93, max=628, avg=170.41, stdev=42.18 00:08:58.684 lat (usec): min=114, max=820, avg=195.27, stdev=45.89 00:08:58.684 clat percentiles (usec): 00:08:58.684 | 1.00th=[ 103], 5.00th=[ 112], 10.00th=[ 117], 20.00th=[ 128], 00:08:58.684 | 30.00th=[ 147], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 182], 00:08:58.684 | 70.00th=[ 190], 80.00th=[ 200], 90.00th=[ 215], 95.00th=[ 231], 00:08:58.684 | 99.00th=[ 302], 99.50th=[ 334], 99.90th=[ 424], 99.95th=[ 453], 00:08:58.684 | 99.99th=[ 627] 00:08:58.684 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:08:58.684 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:58.684 lat (usec) : 100=0.20%, 250=67.71%, 500=31.56%, 750=0.50% 00:08:58.684 lat (msec) : 2=0.03% 00:08:58.684 cpu : usr=1.90%, sys=6.90%, ctx=3964, majf=0, minf=11 00:08:58.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.684 issued rwts: total=1913,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.684 
latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.684 job2: (groupid=0, jobs=1): err= 0: pid=66113: Tue Oct 1 13:36:50 2024 00:08:58.684 read: IOPS=1623, BW=6494KiB/s (6649kB/s)(6500KiB/1001msec) 00:08:58.684 slat (nsec): min=14542, max=46653, avg=16930.08, stdev=3292.47 00:08:58.684 clat (usec): min=158, max=909, avg=286.20, stdev=47.48 00:08:58.684 lat (usec): min=175, max=924, avg=303.13, stdev=48.57 00:08:58.684 clat percentiles (usec): 00:08:58.684 | 1.00th=[ 196], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 262], 00:08:58.684 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:08:58.684 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 371], 00:08:58.684 | 99.00th=[ 486], 99.50th=[ 537], 99.90th=[ 627], 99.95th=[ 914], 00:08:58.684 | 99.99th=[ 914] 00:08:58.684 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:58.684 slat (usec): min=19, max=182, avg=26.98, stdev= 9.36 00:08:58.684 clat (usec): min=117, max=616, avg=216.97, stdev=43.79 00:08:58.684 lat (usec): min=139, max=798, avg=243.96, stdev=49.39 00:08:58.684 clat percentiles (usec): 00:08:58.684 | 1.00th=[ 126], 5.00th=[ 145], 10.00th=[ 188], 20.00th=[ 196], 00:08:58.684 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217], 00:08:58.684 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 285], 00:08:58.684 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 494], 99.95th=[ 603], 00:08:58.684 | 99.99th=[ 619] 00:08:58.684 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:08:58.684 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:58.684 lat (usec) : 250=52.27%, 500=47.29%, 750=0.41%, 1000=0.03% 00:08:58.684 cpu : usr=2.30%, sys=5.90%, ctx=3675, majf=0, minf=15 00:08:58.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.684 issued rwts: total=1625,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.684 job3: (groupid=0, jobs=1): err= 0: pid=66114: Tue Oct 1 13:36:50 2024 00:08:58.684 read: IOPS=1640, BW=6561KiB/s (6719kB/s)(6568KiB/1001msec) 00:08:58.684 slat (nsec): min=14270, max=86721, avg=18726.67, stdev=5277.60 00:08:58.684 clat (usec): min=162, max=2135, avg=289.52, stdev=79.90 00:08:58.684 lat (usec): min=182, max=2157, avg=308.24, stdev=80.80 00:08:58.684 clat percentiles (usec): 00:08:58.684 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 260], 00:08:58.684 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:08:58.684 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 367], 00:08:58.684 | 99.00th=[ 519], 99.50th=[ 537], 99.90th=[ 1745], 99.95th=[ 2147], 00:08:58.684 | 99.99th=[ 2147] 00:08:58.684 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:58.684 slat (usec): min=20, max=207, avg=28.11, stdev= 9.95 00:08:58.684 clat (usec): min=109, max=645, avg=208.94, stdev=32.71 00:08:58.684 lat (usec): min=131, max=852, avg=237.04, stdev=37.03 00:08:58.684 clat percentiles (usec): 00:08:58.684 | 1.00th=[ 126], 5.00th=[ 145], 10.00th=[ 184], 20.00th=[ 194], 00:08:58.684 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:08:58.684 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 253], 00:08:58.684 | 99.00th=[ 285], 99.50th=[ 
306], 99.90th=[ 482], 99.95th=[ 529], 00:08:58.684 | 99.99th=[ 644] 00:08:58.684 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:08:58.684 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:58.684 lat (usec) : 250=55.31%, 500=43.66%, 750=0.92%, 1000=0.05% 00:08:58.684 lat (msec) : 2=0.03%, 4=0.03% 00:08:58.684 cpu : usr=1.90%, sys=6.90%, ctx=3690, majf=0, minf=5 00:08:58.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.684 issued rwts: total=1642,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.684 00:08:58.684 Run status group 0 (all jobs): 00:08:58.684 READ: bw=27.4MiB/s (28.7MB/s), 6494KiB/s-7652KiB/s (6649kB/s-7836kB/s), io=27.4MiB (28.8MB), run=1000-1001msec 00:08:58.684 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8192KiB/s (8380kB/s-8389kB/s), io=32.0MiB (33.6MB), run=1000-1001msec 00:08:58.684 00:08:58.684 Disk stats (read/write): 00:08:58.684 nvme0n1: ios=1586/1673, merge=0/0, ticks=466/337, in_queue=803, util=86.06% 00:08:58.684 nvme0n2: ios=1584/1877, merge=0/0, ticks=480/340, in_queue=820, util=87.58% 00:08:58.684 nvme0n3: ios=1511/1536, merge=0/0, ticks=454/355, in_queue=809, util=88.70% 00:08:58.684 nvme0n4: ios=1531/1536, merge=0/0, ticks=445/337, in_queue=782, util=89.53% 00:08:58.684 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:58.684 [global] 00:08:58.684 thread=1 00:08:58.684 invalidate=1 00:08:58.684 rw=randwrite 00:08:58.684 time_based=1 00:08:58.684 runtime=1 00:08:58.684 ioengine=libaio 00:08:58.684 direct=1 00:08:58.684 bs=4096 00:08:58.684 iodepth=1 00:08:58.684 norandommap=0 00:08:58.684 numjobs=1 00:08:58.684 00:08:58.684 verify_dump=1 00:08:58.684 verify_backlog=512 00:08:58.684 verify_state_save=0 00:08:58.684 do_verify=1 00:08:58.684 verify=crc32c-intel 00:08:58.684 [job0] 00:08:58.684 filename=/dev/nvme0n1 00:08:58.684 [job1] 00:08:58.684 filename=/dev/nvme0n2 00:08:58.684 [job2] 00:08:58.684 filename=/dev/nvme0n3 00:08:58.684 [job3] 00:08:58.684 filename=/dev/nvme0n4 00:08:58.684 Could not set queue depth (nvme0n1) 00:08:58.684 Could not set queue depth (nvme0n2) 00:08:58.684 Could not set queue depth (nvme0n3) 00:08:58.684 Could not set queue depth (nvme0n4) 00:08:58.684 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.684 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.684 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.684 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.684 fio-3.35 00:08:58.685 Starting 4 threads 00:09:00.101 00:09:00.101 job0: (groupid=0, jobs=1): err= 0: pid=66172: Tue Oct 1 13:36:51 2024 00:09:00.101 read: IOPS=1653, BW=6613KiB/s (6772kB/s)(6620KiB/1001msec) 00:09:00.101 slat (nsec): min=14249, max=45280, avg=16188.10, stdev=2754.95 00:09:00.101 clat (usec): min=197, max=530, avg=278.92, stdev=35.68 00:09:00.101 lat (usec): min=214, max=552, avg=295.11, stdev=36.62 00:09:00.101 clat percentiles (usec): 
00:09:00.101 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 260], 00:09:00.101 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:09:00.101 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 351], 00:09:00.101 | 99.00th=[ 437], 99.50th=[ 498], 99.90th=[ 529], 99.95th=[ 529], 00:09:00.101 | 99.99th=[ 529] 00:09:00.101 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:00.101 slat (usec): min=20, max=153, avg=26.70, stdev= 8.84 00:09:00.101 clat (usec): min=104, max=1047, avg=219.12, stdev=65.99 00:09:00.101 lat (usec): min=126, max=1105, avg=245.82, stdev=71.29 00:09:00.101 clat percentiles (usec): 00:09:00.101 | 1.00th=[ 116], 5.00th=[ 127], 10.00th=[ 169], 20.00th=[ 186], 00:09:00.101 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:09:00.101 | 70.00th=[ 219], 80.00th=[ 241], 90.00th=[ 322], 95.00th=[ 347], 00:09:00.101 | 99.00th=[ 392], 99.50th=[ 433], 99.90th=[ 685], 99.95th=[ 988], 00:09:00.101 | 99.99th=[ 1045] 00:09:00.101 bw ( KiB/s): min= 8192, max= 8192, per=20.35%, avg=8192.00, stdev= 0.00, samples=1 00:09:00.101 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:00.101 lat (usec) : 250=48.80%, 500=50.85%, 750=0.30%, 1000=0.03% 00:09:00.101 lat (msec) : 2=0.03% 00:09:00.101 cpu : usr=1.50%, sys=6.60%, ctx=3703, majf=0, minf=17 00:09:00.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:00.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.101 issued rwts: total=1655,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.101 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:00.101 job1: (groupid=0, jobs=1): err= 0: pid=66173: Tue Oct 1 13:36:51 2024 00:09:00.101 read: IOPS=2859, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec) 00:09:00.101 slat (nsec): min=13401, max=38415, avg=16193.04, stdev=2972.82 00:09:00.101 clat (usec): min=138, max=215, avg=169.35, stdev=12.07 00:09:00.101 lat (usec): min=153, max=233, avg=185.54, stdev=12.40 00:09:00.101 clat percentiles (usec): 00:09:00.101 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:09:00.101 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:09:00.101 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 190], 00:09:00.101 | 99.00th=[ 202], 99.50th=[ 204], 99.90th=[ 210], 99.95th=[ 215], 00:09:00.101 | 99.99th=[ 217] 00:09:00.101 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:00.101 slat (nsec): min=17321, max=80890, avg=23728.00, stdev=4969.15 00:09:00.101 clat (usec): min=95, max=369, avg=124.77, stdev=11.43 00:09:00.101 lat (usec): min=117, max=390, avg=148.50, stdev=12.25 00:09:00.101 clat percentiles (usec): 00:09:00.101 | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 117], 00:09:00.101 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 127], 00:09:00.101 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 145], 00:09:00.101 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 169], 99.95th=[ 202], 00:09:00.101 | 99.99th=[ 371] 00:09:00.101 bw ( KiB/s): min=12288, max=12288, per=30.52%, avg=12288.00, stdev= 0.00, samples=1 00:09:00.101 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:00.101 lat (usec) : 100=0.12%, 250=99.87%, 500=0.02% 00:09:00.101 cpu : usr=2.70%, sys=9.40%, ctx=5935, majf=0, minf=9 00:09:00.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:00.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.101 issued rwts: total=2862,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.101 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:00.101 job2: (groupid=0, jobs=1): err= 0: pid=66175: Tue Oct 1 13:36:51 2024 00:09:00.101 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:00.101 slat (nsec): min=13254, max=40363, avg=16035.96, stdev=2620.71 00:09:00.101 clat (usec): min=149, max=555, avg=188.22, stdev=19.93 00:09:00.101 lat (usec): min=164, max=570, avg=204.26, stdev=20.11 00:09:00.101 clat percentiles (usec): 00:09:00.102 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:09:00.102 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:09:00.102 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 225], 00:09:00.102 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 293], 99.95th=[ 310], 00:09:00.102 | 99.99th=[ 553] 00:09:00.102 write: IOPS=2905, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:09:00.102 slat (nsec): min=16801, max=69208, avg=22716.80, stdev=3448.61 00:09:00.102 clat (usec): min=110, max=665, avg=137.77, stdev=19.56 00:09:00.102 lat (usec): min=132, max=685, avg=160.49, stdev=19.81 00:09:00.102 clat percentiles (usec): 00:09:00.102 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 126], 00:09:00.102 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:09:00.102 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 167], 00:09:00.102 | 99.00th=[ 188], 99.50th=[ 200], 99.90th=[ 245], 99.95th=[ 553], 00:09:00.102 | 99.99th=[ 668] 00:09:00.102 bw ( KiB/s): min=12288, max=12288, per=30.52%, avg=12288.00, stdev= 0.00, samples=1 00:09:00.102 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:00.102 lat (usec) : 250=99.38%, 500=0.57%, 750=0.05% 00:09:00.102 cpu : usr=2.50%, sys=8.40%, ctx=5468, majf=0, minf=9 00:09:00.102 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:00.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.102 issued rwts: total=2560,2908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.102 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:00.102 job3: (groupid=0, jobs=1): err= 0: pid=66176: Tue Oct 1 13:36:51 2024 00:09:00.102 read: IOPS=1877, BW=7508KiB/s (7689kB/s)(7516KiB/1001msec) 00:09:00.102 slat (nsec): min=12857, max=29131, avg=14787.41, stdev=2022.60 00:09:00.102 clat (usec): min=177, max=544, avg=282.05, stdev=42.85 00:09:00.102 lat (usec): min=192, max=558, avg=296.84, stdev=43.68 00:09:00.102 clat percentiles (usec): 00:09:00.102 | 1.00th=[ 235], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 260], 00:09:00.102 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:09:00.102 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 363], 00:09:00.102 | 99.00th=[ 490], 99.50th=[ 494], 99.90th=[ 515], 99.95th=[ 545], 00:09:00.102 | 99.99th=[ 545] 00:09:00.102 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:00.102 slat (nsec): min=17999, max=83409, avg=21773.18, stdev=3677.51 00:09:00.102 clat (usec): min=105, max=480, avg=190.40, stdev=36.68 00:09:00.102 lat (usec): min=126, max=515, avg=212.18, stdev=37.39 00:09:00.102 clat 
percentiles (usec): 00:09:00.102 | 1.00th=[ 116], 5.00th=[ 124], 10.00th=[ 130], 20.00th=[ 172], 00:09:00.102 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:09:00.102 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 231], 00:09:00.102 | 99.00th=[ 293], 99.50th=[ 359], 99.90th=[ 416], 99.95th=[ 474], 00:09:00.102 | 99.99th=[ 482] 00:09:00.102 bw ( KiB/s): min= 8192, max= 8192, per=20.35%, avg=8192.00, stdev= 0.00, samples=1 00:09:00.102 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:00.102 lat (usec) : 250=55.13%, 500=44.74%, 750=0.13% 00:09:00.102 cpu : usr=2.00%, sys=5.50%, ctx=3927, majf=0, minf=14 00:09:00.102 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:00.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.102 issued rwts: total=1879,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.102 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:00.102 00:09:00.102 Run status group 0 (all jobs): 00:09:00.102 READ: bw=34.9MiB/s (36.6MB/s), 6613KiB/s-11.2MiB/s (6772kB/s-11.7MB/s), io=35.0MiB (36.7MB), run=1001-1001msec 00:09:00.102 WRITE: bw=39.3MiB/s (41.2MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.4MiB (41.3MB), run=1001-1001msec 00:09:00.102 00:09:00.102 Disk stats (read/write): 00:09:00.102 nvme0n1: ios=1586/1709, merge=0/0, ticks=459/385, in_queue=844, util=88.28% 00:09:00.102 nvme0n2: ios=2605/2576, merge=0/0, ticks=462/353, in_queue=815, util=88.50% 00:09:00.102 nvme0n3: ios=2184/2560, merge=0/0, ticks=414/379, in_queue=793, util=89.22% 00:09:00.102 nvme0n4: ios=1536/1901, merge=0/0, ticks=431/381, in_queue=812, util=89.77% 00:09:00.102 13:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:00.102 [global] 00:09:00.102 thread=1 00:09:00.102 invalidate=1 00:09:00.102 rw=write 00:09:00.102 time_based=1 00:09:00.102 runtime=1 00:09:00.102 ioengine=libaio 00:09:00.102 direct=1 00:09:00.102 bs=4096 00:09:00.102 iodepth=128 00:09:00.102 norandommap=0 00:09:00.102 numjobs=1 00:09:00.102 00:09:00.102 verify_dump=1 00:09:00.102 verify_backlog=512 00:09:00.102 verify_state_save=0 00:09:00.102 do_verify=1 00:09:00.102 verify=crc32c-intel 00:09:00.102 [job0] 00:09:00.102 filename=/dev/nvme0n1 00:09:00.102 [job1] 00:09:00.102 filename=/dev/nvme0n2 00:09:00.102 [job2] 00:09:00.102 filename=/dev/nvme0n3 00:09:00.102 [job3] 00:09:00.102 filename=/dev/nvme0n4 00:09:00.102 Could not set queue depth (nvme0n1) 00:09:00.102 Could not set queue depth (nvme0n2) 00:09:00.102 Could not set queue depth (nvme0n3) 00:09:00.102 Could not set queue depth (nvme0n4) 00:09:00.102 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:00.102 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:00.102 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:00.102 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:00.102 fio-3.35 00:09:00.102 Starting 4 threads 00:09:01.479 00:09:01.479 job0: (groupid=0, jobs=1): err= 0: pid=66236: Tue Oct 1 13:36:52 2024 00:09:01.479 read: IOPS=5466, BW=21.4MiB/s (22.4MB/s)(21.4MiB/1002msec) 00:09:01.479 slat (usec): 
min=3, max=3484, avg=87.81, stdev=338.68 00:09:01.479 clat (usec): min=682, max=15255, avg=11453.33, stdev=1182.26 00:09:01.479 lat (usec): min=2364, max=15303, avg=11541.14, stdev=1211.56 00:09:01.479 clat percentiles (usec): 00:09:01.479 | 1.00th=[ 6521], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11207], 00:09:01.479 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:09:01.479 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12780], 95.00th=[13304], 00:09:01.479 | 99.00th=[14091], 99.50th=[14353], 99.90th=[14877], 99.95th=[15008], 00:09:01.479 | 99.99th=[15270] 00:09:01.479 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:01.479 slat (usec): min=9, max=3076, avg=84.10, stdev=343.55 00:09:01.479 clat (usec): min=8314, max=14972, avg=11359.87, stdev=897.50 00:09:01.479 lat (usec): min=8335, max=14995, avg=11443.97, stdev=947.15 00:09:01.479 clat percentiles (usec): 00:09:01.479 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:09:01.479 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:09:01.479 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12256], 95.00th=[13304], 00:09:01.479 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14877], 99.95th=[15008], 00:09:01.479 | 99.99th=[15008] 00:09:01.479 bw ( KiB/s): min=21864, max=23192, per=34.37%, avg=22528.00, stdev=939.04, samples=2 00:09:01.479 iops : min= 5466, max= 5798, avg=5632.00, stdev=234.76, samples=2 00:09:01.479 lat (usec) : 750=0.01% 00:09:01.479 lat (msec) : 4=0.20%, 10=4.50%, 20=95.29% 00:09:01.479 cpu : usr=5.00%, sys=15.88%, ctx=553, majf=0, minf=17 00:09:01.479 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:01.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.479 issued rwts: total=5477,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.479 job1: (groupid=0, jobs=1): err= 0: pid=66237: Tue Oct 1 13:36:52 2024 00:09:01.479 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:09:01.479 slat (usec): min=8, max=8137, avg=83.15, stdev=489.16 00:09:01.479 clat (usec): min=7317, max=20121, avg=11778.08, stdev=1431.87 00:09:01.479 lat (usec): min=7341, max=22465, avg=11861.23, stdev=1448.37 00:09:01.479 clat percentiles (usec): 00:09:01.479 | 1.00th=[ 7767], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[11207], 00:09:01.479 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:09:01.479 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[13304], 00:09:01.479 | 99.00th=[17957], 99.50th=[18220], 99.90th=[20055], 99.95th=[20055], 00:09:01.479 | 99.99th=[20055] 00:09:01.479 write: IOPS=5720, BW=22.3MiB/s (23.4MB/s)(22.4MiB/1002msec); 0 zone resets 00:09:01.479 slat (usec): min=3, max=8478, avg=83.76, stdev=456.16 00:09:01.479 clat (usec): min=1212, max=16123, avg=10590.29, stdev=1137.03 00:09:01.479 lat (usec): min=1237, max=16152, avg=10674.06, stdev=1061.88 00:09:01.479 clat percentiles (usec): 00:09:01.479 | 1.00th=[ 6194], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10159], 00:09:01.479 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:09:01.479 | 70.00th=[11076], 80.00th=[11076], 90.00th=[11207], 95.00th=[11469], 00:09:01.479 | 99.00th=[14746], 99.50th=[15401], 99.90th=[16057], 99.95th=[16057], 00:09:01.479 | 99.99th=[16188] 00:09:01.479 bw ( KiB/s): min=24056, max=24056, 
per=36.70%, avg=24056.00, stdev= 0.00, samples=1 00:09:01.479 iops : min= 6014, max= 6014, avg=6014.00, stdev= 0.00, samples=1 00:09:01.479 lat (msec) : 2=0.04%, 10=9.98%, 20=89.92%, 50=0.07% 00:09:01.479 cpu : usr=5.00%, sys=18.08%, ctx=244, majf=0, minf=10 00:09:01.479 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:01.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.479 issued rwts: total=5632,5732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.479 job2: (groupid=0, jobs=1): err= 0: pid=66238: Tue Oct 1 13:36:52 2024 00:09:01.479 read: IOPS=2452, BW=9811KiB/s (10.0MB/s)(9860KiB/1005msec) 00:09:01.479 slat (usec): min=5, max=7532, avg=199.59, stdev=1021.84 00:09:01.479 clat (usec): min=1919, max=28105, avg=25418.81, stdev=3048.72 00:09:01.479 lat (usec): min=5819, max=28129, avg=25618.39, stdev=2878.97 00:09:01.479 clat percentiles (usec): 00:09:01.479 | 1.00th=[ 6456], 5.00th=[20055], 10.00th=[25297], 20.00th=[25560], 00:09:01.479 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:09:01.479 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27395], 00:09:01.479 | 99.00th=[27919], 99.50th=[27919], 99.90th=[28181], 99.95th=[28181], 00:09:01.479 | 99.99th=[28181] 00:09:01.479 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:09:01.479 slat (usec): min=11, max=6842, avg=190.97, stdev=945.65 00:09:01.479 clat (usec): min=18358, max=28358, avg=24884.32, stdev=1291.20 00:09:01.479 lat (usec): min=18547, max=28383, avg=25075.29, stdev=884.12 00:09:01.479 clat percentiles (usec): 00:09:01.479 | 1.00th=[19268], 5.00th=[23987], 10.00th=[23987], 20.00th=[24249], 00:09:01.479 | 30.00th=[24511], 40.00th=[24773], 50.00th=[24773], 60.00th=[25035], 00:09:01.479 | 70.00th=[25297], 80.00th=[25560], 90.00th=[26084], 95.00th=[27132], 00:09:01.479 | 99.00th=[28181], 99.50th=[28181], 99.90th=[28443], 99.95th=[28443], 00:09:01.479 | 99.99th=[28443] 00:09:01.479 bw ( KiB/s): min= 9464, max=11038, per=15.64%, avg=10251.00, stdev=1112.99, samples=2 00:09:01.479 iops : min= 2366, max= 2759, avg=2562.50, stdev=277.89, samples=2 00:09:01.479 lat (msec) : 2=0.02%, 10=0.64%, 20=3.06%, 50=96.28% 00:09:01.479 cpu : usr=2.29%, sys=7.27%, ctx=161, majf=0, minf=13 00:09:01.479 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:09:01.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.479 issued rwts: total=2465,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.479 job3: (groupid=0, jobs=1): err= 0: pid=66239: Tue Oct 1 13:36:52 2024 00:09:01.479 read: IOPS=2450, BW=9801KiB/s (10.0MB/s)(9860KiB/1006msec) 00:09:01.479 slat (usec): min=8, max=9039, avg=199.86, stdev=1024.13 00:09:01.479 clat (usec): min=2203, max=29037, avg=25447.06, stdev=2887.96 00:09:01.479 lat (usec): min=7007, max=29052, avg=25646.91, stdev=2706.12 00:09:01.479 clat percentiles (usec): 00:09:01.479 | 1.00th=[ 7635], 5.00th=[20317], 10.00th=[25297], 20.00th=[25560], 00:09:01.479 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:09:01.479 | 70.00th=[26084], 80.00th=[26608], 90.00th=[26870], 95.00th=[27919], 00:09:01.479 | 99.00th=[28967], 
99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:09:01.479 | 99.99th=[28967] 00:09:01.479 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:09:01.480 slat (usec): min=12, max=6472, avg=190.91, stdev=941.00 00:09:01.480 clat (usec): min=18163, max=28063, avg=24915.65, stdev=1271.33 00:09:01.480 lat (usec): min=19449, max=28088, avg=25106.56, stdev=855.51 00:09:01.480 clat percentiles (usec): 00:09:01.480 | 1.00th=[19268], 5.00th=[23987], 10.00th=[23987], 20.00th=[24249], 00:09:01.480 | 30.00th=[24511], 40.00th=[24773], 50.00th=[24773], 60.00th=[25035], 00:09:01.480 | 70.00th=[25297], 80.00th=[25560], 90.00th=[26084], 95.00th=[27132], 00:09:01.480 | 99.00th=[27919], 99.50th=[27919], 99.90th=[28181], 99.95th=[28181], 00:09:01.480 | 99.99th=[28181] 00:09:01.480 bw ( KiB/s): min= 9464, max=11016, per=15.62%, avg=10240.00, stdev=1097.43, samples=2 00:09:01.480 iops : min= 2366, max= 2754, avg=2560.00, stdev=274.36, samples=2 00:09:01.480 lat (msec) : 4=0.02%, 10=0.64%, 20=2.63%, 50=96.72% 00:09:01.480 cpu : usr=2.99%, sys=7.56%, ctx=160, majf=0, minf=13 00:09:01.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:09:01.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.480 issued rwts: total=2465,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.480 00:09:01.480 Run status group 0 (all jobs): 00:09:01.480 READ: bw=62.3MiB/s (65.3MB/s), 9801KiB/s-22.0MiB/s (10.0MB/s-23.0MB/s), io=62.7MiB (65.7MB), run=1002-1006msec 00:09:01.480 WRITE: bw=64.0MiB/s (67.1MB/s), 9.94MiB/s-22.3MiB/s (10.4MB/s-23.4MB/s), io=64.4MiB (67.5MB), run=1002-1006msec 00:09:01.480 00:09:01.480 Disk stats (read/write): 00:09:01.480 nvme0n1: ios=4658/4990, merge=0/0, ticks=16836/16008, in_queue=32844, util=88.28% 00:09:01.480 nvme0n2: ios=4666/5120, merge=0/0, ticks=50542/49480, in_queue=100022, util=88.59% 00:09:01.480 nvme0n3: ios=2083/2272, merge=0/0, ticks=11688/11598, in_queue=23286, util=89.50% 00:09:01.480 nvme0n4: ios=2048/2272, merge=0/0, ticks=12096/12596, in_queue=24692, util=89.44% 00:09:01.480 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:01.480 [global] 00:09:01.480 thread=1 00:09:01.480 invalidate=1 00:09:01.480 rw=randwrite 00:09:01.480 time_based=1 00:09:01.480 runtime=1 00:09:01.480 ioengine=libaio 00:09:01.480 direct=1 00:09:01.480 bs=4096 00:09:01.480 iodepth=128 00:09:01.480 norandommap=0 00:09:01.480 numjobs=1 00:09:01.480 00:09:01.480 verify_dump=1 00:09:01.480 verify_backlog=512 00:09:01.480 verify_state_save=0 00:09:01.480 do_verify=1 00:09:01.480 verify=crc32c-intel 00:09:01.480 [job0] 00:09:01.480 filename=/dev/nvme0n1 00:09:01.480 [job1] 00:09:01.480 filename=/dev/nvme0n2 00:09:01.480 [job2] 00:09:01.480 filename=/dev/nvme0n3 00:09:01.480 [job3] 00:09:01.480 filename=/dev/nvme0n4 00:09:01.480 Could not set queue depth (nvme0n1) 00:09:01.480 Could not set queue depth (nvme0n2) 00:09:01.480 Could not set queue depth (nvme0n3) 00:09:01.480 Could not set queue depth (nvme0n4) 00:09:01.480 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.480 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.480 
job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.480 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.480 fio-3.35 00:09:01.480 Starting 4 threads 00:09:02.859 00:09:02.859 job0: (groupid=0, jobs=1): err= 0: pid=66292: Tue Oct 1 13:36:54 2024 00:09:02.859 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:09:02.859 slat (usec): min=4, max=14873, avg=140.75, stdev=882.25 00:09:02.859 clat (usec): min=5257, max=63272, avg=18837.19, stdev=6542.03 00:09:02.859 lat (usec): min=5272, max=63282, avg=18977.94, stdev=6597.71 00:09:02.859 clat percentiles (usec): 00:09:02.859 | 1.00th=[10421], 5.00th=[12780], 10.00th=[13698], 20.00th=[13960], 00:09:02.859 | 30.00th=[14484], 40.00th=[16319], 50.00th=[19006], 60.00th=[19792], 00:09:02.859 | 70.00th=[20317], 80.00th=[21103], 90.00th=[25560], 95.00th=[28443], 00:09:02.859 | 99.00th=[52691], 99.50th=[58459], 99.90th=[63177], 99.95th=[63177], 00:09:02.859 | 99.99th=[63177] 00:09:02.859 write: IOPS=4009, BW=15.7MiB/s (16.4MB/s)(15.7MiB/1005msec); 0 zone resets 00:09:02.859 slat (usec): min=5, max=14425, avg=116.11, stdev=669.56 00:09:02.859 clat (usec): min=1218, max=63238, avg=14845.75, stdev=7142.02 00:09:02.859 lat (usec): min=3203, max=63244, avg=14961.85, stdev=7157.74 00:09:02.859 clat percentiles (usec): 00:09:02.859 | 1.00th=[ 5014], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10159], 00:09:02.859 | 30.00th=[10552], 40.00th=[10814], 50.00th=[12387], 60.00th=[14746], 00:09:02.859 | 70.00th=[17171], 80.00th=[18482], 90.00th=[21890], 95.00th=[32900], 00:09:02.859 | 99.00th=[41681], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:09:02.859 | 99.99th=[63177] 00:09:02.859 bw ( KiB/s): min=14832, max=16416, per=29.72%, avg=15624.00, stdev=1120.06, samples=2 00:09:02.859 iops : min= 3708, max= 4104, avg=3906.00, stdev=280.01, samples=2 00:09:02.859 lat (msec) : 2=0.01%, 4=0.18%, 10=7.35%, 20=68.61%, 50=23.22% 00:09:02.859 lat (msec) : 100=0.62% 00:09:02.859 cpu : usr=3.88%, sys=10.26%, ctx=213, majf=0, minf=9 00:09:02.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:02.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.859 issued rwts: total=3584,4030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.859 job1: (groupid=0, jobs=1): err= 0: pid=66293: Tue Oct 1 13:36:54 2024 00:09:02.859 read: IOPS=1594, BW=6380KiB/s (6533kB/s)(6488KiB/1017msec) 00:09:02.859 slat (usec): min=4, max=23183, avg=280.40, stdev=1548.99 00:09:02.859 clat (usec): min=4700, max=81572, avg=34088.63, stdev=14072.70 00:09:02.859 lat (usec): min=12081, max=81582, avg=34369.03, stdev=14168.37 00:09:02.859 clat percentiles (usec): 00:09:02.859 | 1.00th=[12387], 5.00th=[18744], 10.00th=[19006], 20.00th=[19268], 00:09:02.859 | 30.00th=[20055], 40.00th=[29754], 50.00th=[34866], 60.00th=[37487], 00:09:02.859 | 70.00th=[41681], 80.00th=[44303], 90.00th=[52691], 95.00th=[59507], 00:09:02.859 | 99.00th=[79168], 99.50th=[80217], 99.90th=[81265], 99.95th=[81265], 00:09:02.859 | 99.99th=[81265] 00:09:02.859 write: IOPS=2013, BW=8055KiB/s (8248kB/s)(8192KiB/1017msec); 0 zone resets 00:09:02.859 slat (usec): min=7, max=17118, avg=263.45, stdev=1238.95 00:09:02.859 clat (msec): min=9, max=110, avg=35.89, stdev=24.40 00:09:02.859 lat 
(msec): min=12, max=112, avg=36.15, stdev=24.55 00:09:02.859 clat percentiles (msec): 00:09:02.859 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 18], 00:09:02.859 | 30.00th=[ 19], 40.00th=[ 19], 50.00th=[ 24], 60.00th=[ 31], 00:09:02.859 | 70.00th=[ 45], 80.00th=[ 56], 90.00th=[ 78], 95.00th=[ 95], 00:09:02.859 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 104], 99.95th=[ 108], 00:09:02.859 | 99.99th=[ 110] 00:09:02.859 bw ( KiB/s): min= 5808, max=10260, per=15.28%, avg=8034.00, stdev=3148.04, samples=2 00:09:02.859 iops : min= 1452, max= 2565, avg=2008.50, stdev=787.01, samples=2 00:09:02.859 lat (msec) : 10=0.08%, 20=38.83%, 50=41.31%, 100=19.65%, 250=0.14% 00:09:02.859 cpu : usr=2.26%, sys=5.71%, ctx=359, majf=0, minf=17 00:09:02.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:09:02.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.859 issued rwts: total=1622,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.859 job2: (groupid=0, jobs=1): err= 0: pid=66294: Tue Oct 1 13:36:54 2024 00:09:02.859 read: IOPS=1508, BW=6035KiB/s (6180kB/s)(6144KiB/1018msec) 00:09:02.859 slat (usec): min=8, max=20717, avg=297.62, stdev=1402.88 00:09:02.859 clat (usec): min=18863, max=78994, avg=39366.09, stdev=15115.70 00:09:02.859 lat (usec): min=18880, max=79028, avg=39663.71, stdev=15194.59 00:09:02.859 clat percentiles (usec): 00:09:02.859 | 1.00th=[19006], 5.00th=[19268], 10.00th=[21890], 20.00th=[22676], 00:09:02.859 | 30.00th=[26870], 40.00th=[35390], 50.00th=[40109], 60.00th=[42206], 00:09:02.859 | 70.00th=[44827], 80.00th=[52691], 90.00th=[61604], 95.00th=[67634], 00:09:02.859 | 99.00th=[74974], 99.50th=[76022], 99.90th=[76022], 99.95th=[79168], 00:09:02.859 | 99.99th=[79168] 00:09:02.859 write: IOPS=1812, BW=7250KiB/s (7423kB/s)(7380KiB/1018msec); 0 zone resets 00:09:02.859 slat (usec): min=7, max=29135, avg=288.09, stdev=1489.71 00:09:02.859 clat (msec): min=12, max=117, avg=37.13, stdev=23.27 00:09:02.859 lat (msec): min=15, max=117, avg=37.41, stdev=23.43 00:09:02.859 clat percentiles (msec): 00:09:02.859 | 1.00th=[ 16], 5.00th=[ 16], 10.00th=[ 21], 20.00th=[ 22], 00:09:02.859 | 30.00th=[ 22], 40.00th=[ 25], 50.00th=[ 28], 60.00th=[ 30], 00:09:02.859 | 70.00th=[ 36], 80.00th=[ 56], 90.00th=[ 79], 95.00th=[ 95], 00:09:02.859 | 99.00th=[ 97], 99.50th=[ 97], 99.90th=[ 111], 99.95th=[ 117], 00:09:02.859 | 99.99th=[ 117] 00:09:02.859 bw ( KiB/s): min= 5544, max= 8192, per=13.06%, avg=6868.00, stdev=1872.42, samples=2 00:09:02.859 iops : min= 1386, max= 2048, avg=1717.00, stdev=468.10, samples=2 00:09:02.859 lat (msec) : 20=8.37%, 50=69.98%, 100=21.59%, 250=0.06% 00:09:02.859 cpu : usr=1.87%, sys=5.21%, ctx=402, majf=0, minf=13 00:09:02.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:09:02.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.860 issued rwts: total=1536,1845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.860 job3: (groupid=0, jobs=1): err= 0: pid=66295: Tue Oct 1 13:36:54 2024 00:09:02.860 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:09:02.860 slat (usec): min=8, max=4627, avg=92.39, stdev=402.52 00:09:02.860 clat (usec): 
min=8596, max=31412, avg=12281.68, stdev=3736.84 00:09:02.860 lat (usec): min=8866, max=31815, avg=12374.07, stdev=3769.86 00:09:02.860 clat percentiles (usec): 00:09:02.860 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10683], 00:09:02.860 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11076], 60.00th=[11338], 00:09:02.860 | 70.00th=[11469], 80.00th=[11731], 90.00th=[15008], 95.00th=[22676], 00:09:02.860 | 99.00th=[26608], 99.50th=[26870], 99.90th=[28181], 99.95th=[28181], 00:09:02.860 | 99.99th=[31327] 00:09:02.860 write: IOPS=5434, BW=21.2MiB/s (22.3MB/s)(21.3MiB/1004msec); 0 zone resets 00:09:02.860 slat (usec): min=6, max=8360, avg=89.17, stdev=505.78 00:09:02.860 clat (usec): min=269, max=29401, avg=11560.80, stdev=3846.02 00:09:02.860 lat (usec): min=3977, max=31539, avg=11649.97, stdev=3902.63 00:09:02.860 clat percentiles (usec): 00:09:02.860 | 1.00th=[ 5080], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[ 9765], 00:09:02.860 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:09:02.860 | 70.00th=[10552], 80.00th=[10945], 90.00th=[20579], 95.00th=[21890], 00:09:02.860 | 99.00th=[21890], 99.50th=[22152], 99.90th=[26346], 99.95th=[27132], 00:09:02.860 | 99.99th=[29492] 00:09:02.860 bw ( KiB/s): min=18936, max=23688, per=40.54%, avg=21312.00, stdev=3360.17, samples=2 00:09:02.860 iops : min= 4734, max= 5922, avg=5328.00, stdev=840.04, samples=2 00:09:02.860 lat (usec) : 500=0.01% 00:09:02.860 lat (msec) : 4=0.01%, 10=21.30%, 20=68.48%, 50=10.20% 00:09:02.860 cpu : usr=4.89%, sys=14.56%, ctx=415, majf=0, minf=9 00:09:02.860 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:02.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.860 issued rwts: total=5120,5456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.860 00:09:02.860 Run status group 0 (all jobs): 00:09:02.860 READ: bw=45.5MiB/s (47.7MB/s), 6035KiB/s-19.9MiB/s (6180kB/s-20.9MB/s), io=46.3MiB (48.6MB), run=1004-1018msec 00:09:02.860 WRITE: bw=51.3MiB/s (53.8MB/s), 7250KiB/s-21.2MiB/s (7423kB/s-22.3MB/s), io=52.3MiB (54.8MB), run=1004-1018msec 00:09:02.860 00:09:02.860 Disk stats (read/write): 00:09:02.860 nvme0n1: ios=2988/3072, merge=0/0, ticks=55931/47224, in_queue=103155, util=89.18% 00:09:02.860 nvme0n2: ios=1570/1881, merge=0/0, ticks=30646/35243, in_queue=65889, util=87.74% 00:09:02.860 nvme0n3: ios=1542/1604, merge=0/0, ticks=28735/22566, in_queue=51301, util=88.00% 00:09:02.860 nvme0n4: ios=4159/4608, merge=0/0, ticks=24937/22904, in_queue=47841, util=88.33% 00:09:02.860 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:02.860 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66309 00:09:02.860 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:02.860 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:02.860 [global] 00:09:02.860 thread=1 00:09:02.860 invalidate=1 00:09:02.860 rw=read 00:09:02.860 time_based=1 00:09:02.860 runtime=10 00:09:02.860 ioengine=libaio 00:09:02.860 direct=1 00:09:02.860 bs=4096 00:09:02.860 iodepth=1 00:09:02.860 norandommap=1 00:09:02.860 numjobs=1 00:09:02.860 00:09:02.860 [job0] 00:09:02.860 filename=/dev/nvme0n1 00:09:02.860 
[job1] 00:09:02.860 filename=/dev/nvme0n2 00:09:02.860 [job2] 00:09:02.860 filename=/dev/nvme0n3 00:09:02.860 [job3] 00:09:02.860 filename=/dev/nvme0n4 00:09:02.860 Could not set queue depth (nvme0n1) 00:09:02.860 Could not set queue depth (nvme0n2) 00:09:02.860 Could not set queue depth (nvme0n3) 00:09:02.860 Could not set queue depth (nvme0n4) 00:09:02.860 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.860 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.860 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.860 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.860 fio-3.35 00:09:02.860 Starting 4 threads 00:09:06.147 13:36:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:06.147 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=61399040, buflen=4096 00:09:06.147 fio: pid=66358, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.147 13:36:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:06.147 fio: pid=66357, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.147 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44511232, buflen=4096 00:09:06.147 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.147 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:06.407 fio: pid=66355, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.407 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=55832576, buflen=4096 00:09:06.667 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.667 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:06.667 fio: pid=66356, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.667 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=6139904, buflen=4096 00:09:06.926 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.926 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:06.926 00:09:06.926 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66355: Tue Oct 1 13:36:58 2024 00:09:06.926 read: IOPS=3877, BW=15.1MiB/s (15.9MB/s)(53.2MiB/3516msec) 00:09:06.926 slat (usec): min=11, max=12495, avg=19.81, stdev=166.76 00:09:06.926 clat (usec): min=130, max=4306, avg=236.34, stdev=82.97 00:09:06.926 lat (usec): min=148, max=12682, avg=256.15, stdev=186.26 00:09:06.926 clat percentiles (usec): 00:09:06.926 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 
00:09:06.926 | 30.00th=[ 178], 40.00th=[ 225], 50.00th=[ 251], 60.00th=[ 260], 00:09:06.926 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 326], 00:09:06.926 | 99.00th=[ 441], 99.50th=[ 537], 99.90th=[ 832], 99.95th=[ 1106], 00:09:06.926 | 99.99th=[ 2409] 00:09:06.926 bw ( KiB/s): min=13048, max=19824, per=24.17%, avg=14649.33, stdev=2555.63, samples=6 00:09:06.926 iops : min= 3262, max= 4956, avg=3662.33, stdev=638.91, samples=6 00:09:06.926 lat (usec) : 250=49.96%, 500=49.41%, 750=0.49%, 1000=0.07% 00:09:06.926 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01% 00:09:06.926 cpu : usr=1.71%, sys=5.80%, ctx=13636, majf=0, minf=1 00:09:06.926 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.926 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.926 issued rwts: total=13632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.926 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66356: Tue Oct 1 13:36:58 2024 00:09:06.926 read: IOPS=4723, BW=18.5MiB/s (19.3MB/s)(69.9MiB/3786msec) 00:09:06.926 slat (usec): min=8, max=15760, avg=20.14, stdev=229.15 00:09:06.926 clat (usec): min=131, max=6297, avg=189.95, stdev=119.17 00:09:06.926 lat (usec): min=144, max=16165, avg=210.09, stdev=259.98 00:09:06.926 clat percentiles (usec): 00:09:06.927 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:09:06.927 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:09:06.927 | 70.00th=[ 186], 80.00th=[ 204], 90.00th=[ 253], 95.00th=[ 269], 00:09:06.927 | 99.00th=[ 334], 99.50th=[ 404], 99.90th=[ 1156], 99.95th=[ 3392], 00:09:06.927 | 99.99th=[ 5211] 00:09:06.927 bw ( KiB/s): min=12935, max=21496, per=31.15%, avg=18883.29, stdev=3507.93, samples=7 00:09:06.927 iops : min= 3233, max= 5374, avg=4720.71, stdev=877.19, samples=7 00:09:06.927 lat (usec) : 250=88.64%, 500=11.12%, 750=0.13%, 1000=0.01% 00:09:06.927 lat (msec) : 2=0.03%, 4=0.04%, 10=0.03% 00:09:06.927 cpu : usr=1.35%, sys=6.84%, ctx=17892, majf=0, minf=1 00:09:06.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.927 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.927 issued rwts: total=17884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.927 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66357: Tue Oct 1 13:36:58 2024 00:09:06.927 read: IOPS=3336, BW=13.0MiB/s (13.7MB/s)(42.4MiB/3257msec) 00:09:06.927 slat (usec): min=8, max=14947, avg=19.11, stdev=181.84 00:09:06.927 clat (usec): min=148, max=7619, avg=278.72, stdev=114.72 00:09:06.927 lat (usec): min=163, max=15248, avg=297.82, stdev=215.54 00:09:06.927 clat percentiles (usec): 00:09:06.927 | 1.00th=[ 176], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 251], 00:09:06.927 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:09:06.927 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 396], 00:09:06.927 | 99.00th=[ 474], 99.50th=[ 553], 99.90th=[ 857], 99.95th=[ 2737], 00:09:06.927 | 99.99th=[ 4080] 00:09:06.927 bw ( KiB/s): min=11904, max=14744, per=22.25%, avg=13488.00, stdev=993.95, samples=6 
00:09:06.927 iops : min= 2976, max= 3686, avg=3372.00, stdev=248.49, samples=6 00:09:06.927 lat (usec) : 250=19.20%, 500=79.93%, 750=0.71%, 1000=0.07% 00:09:06.927 lat (msec) : 2=0.02%, 4=0.04%, 10=0.02% 00:09:06.927 cpu : usr=1.26%, sys=5.04%, ctx=10874, majf=0, minf=1 00:09:06.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.927 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.927 issued rwts: total=10868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.927 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66358: Tue Oct 1 13:36:58 2024 00:09:06.927 read: IOPS=5018, BW=19.6MiB/s (20.6MB/s)(58.6MiB/2987msec) 00:09:06.927 slat (nsec): min=12643, max=79937, avg=15199.18, stdev=3112.25 00:09:06.927 clat (usec): min=148, max=2148, avg=182.40, stdev=23.57 00:09:06.927 lat (usec): min=163, max=2175, avg=197.59, stdev=23.92 00:09:06.927 clat percentiles (usec): 00:09:06.927 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:09:06.927 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:09:06.927 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206], 00:09:06.927 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 265], 99.95th=[ 297], 00:09:06.927 | 99.99th=[ 1237] 00:09:06.927 bw ( KiB/s): min=19944, max=20280, per=33.24%, avg=20147.20, stdev=148.12, samples=5 00:09:06.927 iops : min= 4986, max= 5070, avg=5036.80, stdev=37.03, samples=5 00:09:06.927 lat (usec) : 250=99.87%, 500=0.11%, 750=0.01% 00:09:06.927 lat (msec) : 2=0.01%, 4=0.01% 00:09:06.927 cpu : usr=1.71%, sys=6.97%, ctx=14992, majf=0, minf=2 00:09:06.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.927 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.927 issued rwts: total=14991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.927 00:09:06.927 Run status group 0 (all jobs): 00:09:06.927 READ: bw=59.2MiB/s (62.1MB/s), 13.0MiB/s-19.6MiB/s (13.7MB/s-20.6MB/s), io=224MiB (235MB), run=2987-3786msec 00:09:06.927 00:09:06.927 Disk stats (read/write): 00:09:06.927 nvme0n1: ios=12757/0, merge=0/0, ticks=3100/0, in_queue=3100, util=95.08% 00:09:06.927 nvme0n2: ios=16856/0, merge=0/0, ticks=3245/0, in_queue=3245, util=94.48% 00:09:06.927 nvme0n3: ios=10423/0, merge=0/0, ticks=2879/0, in_queue=2879, util=95.76% 00:09:06.927 nvme0n4: ios=14361/0, merge=0/0, ticks=2684/0, in_queue=2684, util=96.75% 00:09:07.185 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.185 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:07.443 13:36:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.443 13:36:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:07.700 13:36:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # 
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.700 13:36:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:07.959 13:36:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.959 13:36:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:08.217 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:08.217 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66309 00:09:08.217 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:08.217 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.217 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.217 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:08.217 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:08.217 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.497 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.497 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:08.497 nvmf hotplug test: fio failed as expected 00:09:08.497 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:08.497 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:08.497 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:08.497 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.755 13:37:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.755 rmmod nvme_tcp 00:09:08.755 rmmod nvme_fabrics 00:09:08.755 rmmod nvme_keyring 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 65928 ']' 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 65928 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 65928 ']' 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 65928 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65928 00:09:08.755 killing process with pid 65928 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65928' 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 65928 00:09:08.755 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 65928 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:09.011 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:09.270 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.270 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.270 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:09.270 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.270 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.270 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.270 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:09.270 ************************************ 00:09:09.270 END TEST nvmf_fio_target 00:09:09.270 ************************************ 00:09:09.270 00:09:09.270 real 0m20.023s 00:09:09.270 user 1m15.059s 00:09:09.270 sys 0m10.482s 00:09:09.270 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.270 13:37:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:09.270 13:37:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:09.270 13:37:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:09.270 13:37:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.270 13:37:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.270 ************************************ 00:09:09.270 START TEST nvmf_bdevio 00:09:09.270 ************************************ 00:09:09.270 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:09.529 * Looking for test storage... 
00:09:09.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.529 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:09.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.530 --rc genhtml_branch_coverage=1 00:09:09.530 --rc genhtml_function_coverage=1 00:09:09.530 --rc genhtml_legend=1 00:09:09.530 --rc geninfo_all_blocks=1 00:09:09.530 --rc geninfo_unexecuted_blocks=1 00:09:09.530 00:09:09.530 ' 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:09.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.530 --rc genhtml_branch_coverage=1 00:09:09.530 --rc genhtml_function_coverage=1 00:09:09.530 --rc genhtml_legend=1 00:09:09.530 --rc geninfo_all_blocks=1 00:09:09.530 --rc geninfo_unexecuted_blocks=1 00:09:09.530 00:09:09.530 ' 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:09.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.530 --rc genhtml_branch_coverage=1 00:09:09.530 --rc genhtml_function_coverage=1 00:09:09.530 --rc genhtml_legend=1 00:09:09.530 --rc geninfo_all_blocks=1 00:09:09.530 --rc geninfo_unexecuted_blocks=1 00:09:09.530 00:09:09.530 ' 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:09.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.530 --rc genhtml_branch_coverage=1 00:09:09.530 --rc genhtml_function_coverage=1 00:09:09.530 --rc genhtml_legend=1 00:09:09.530 --rc geninfo_all_blocks=1 00:09:09.530 --rc geninfo_unexecuted_blocks=1 00:09:09.530 00:09:09.530 ' 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:09.530 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.531 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:09.531 Cannot find device "nvmf_init_br" 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:09.531 Cannot find device "nvmf_init_br2" 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:09.531 Cannot find device "nvmf_tgt_br" 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.531 Cannot find device "nvmf_tgt_br2" 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:09.531 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:09.789 Cannot find device "nvmf_init_br" 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:09.789 Cannot find device "nvmf_init_br2" 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:09.789 Cannot find device "nvmf_tgt_br" 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:09.789 Cannot find device "nvmf_tgt_br2" 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:09.789 Cannot find device "nvmf_br" 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:09.789 Cannot find device "nvmf_init_if" 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:09.789 Cannot find device "nvmf_init_if2" 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.789 
13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:09.789 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:10.047 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:10.047 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:09:10.047 00:09:10.047 --- 10.0.0.3 ping statistics --- 00:09:10.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.047 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:10.047 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:10.047 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:09:10.047 00:09:10.047 --- 10.0.0.4 ping statistics --- 00:09:10.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.047 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:10.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:10.047 00:09:10.047 --- 10.0.0.1 ping statistics --- 00:09:10.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.047 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:10.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:10.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:09:10.047 00:09:10.047 --- 10.0.0.2 ping statistics --- 00:09:10.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.047 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.047 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=66684 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 66684 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 66684 ']' 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.048 13:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.048 [2024-10-01 13:37:01.813484] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
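[Editor's note] For readability, the veth/namespace topology that nvmf/common.sh traced out above boils down to the following sketch. Interface names, addresses, and iptables options are copied from the trace; error handling and the duplicate *_if2/*_br2 pair are omitted, and the SPDK_NVMF comment text is elided:

    # target side runs inside its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per endpoint: *_if is the endpoint, *_br is the end plugged into the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator (host) side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge ties the host-side veth ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port and allow bridged traffic, each rule tagged for later cleanup
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.3   # host -> target-namespace sanity check, as in the trace above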
00:09:10.048 [2024-10-01 13:37:01.814058] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.305 [2024-10-01 13:37:01.955579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.305 [2024-10-01 13:37:02.029275] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.305 [2024-10-01 13:37:02.029335] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.305 [2024-10-01 13:37:02.029349] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.305 [2024-10-01 13:37:02.029358] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.305 [2024-10-01 13:37:02.029367] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.305 [2024-10-01 13:37:02.029498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:10.305 [2024-10-01 13:37:02.029983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:10.305 [2024-10-01 13:37:02.030238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:10.305 [2024-10-01 13:37:02.030338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.305 [2024-10-01 13:37:02.063402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.305 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.305 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:10.305 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:10.305 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:10.305 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.563 [2024-10-01 13:37:02.176906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.563 Malloc0 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.563 [2024-10-01 13:37:02.228657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:10.563 { 00:09:10.563 "params": { 00:09:10.563 "name": "Nvme$subsystem", 00:09:10.563 "trtype": "$TEST_TRANSPORT", 00:09:10.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.563 "adrfam": "ipv4", 00:09:10.563 "trsvcid": "$NVMF_PORT", 00:09:10.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.563 "hdgst": ${hdgst:-false}, 00:09:10.563 "ddgst": ${ddgst:-false} 00:09:10.563 }, 00:09:10.563 "method": "bdev_nvme_attach_controller" 00:09:10.563 } 00:09:10.563 EOF 00:09:10.563 )") 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
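[Editor's note] The target-side configuration above is issued through the harness's rpc_cmd helper. Assuming rpc_cmd forwards to scripts/rpc.py on the /var/tmp/spdk.sock socket that waitforlisten polls earlier in the trace, the same setup could be reproduced by hand roughly as sketched below; command names and arguments are taken from the trace, the rpc.py invocation style is an assumption:

    # TCP transport with the same options bdevio.sh passes (-o, -u 8192)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # RAM-backed bdev: 64 MiB of 512-byte blocks (matches the "Nvme1n1: 131072 blocks of 512 bytes" line below)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem with the serial from the trace; -a allows any host to connect
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # listener on the namespace-side address that the host pinged during setup
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420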
00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:09:10.563 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:10.563 "params": { 00:09:10.563 "name": "Nvme1", 00:09:10.563 "trtype": "tcp", 00:09:10.563 "traddr": "10.0.0.3", 00:09:10.563 "adrfam": "ipv4", 00:09:10.563 "trsvcid": "4420", 00:09:10.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.563 "hdgst": false, 00:09:10.563 "ddgst": false 00:09:10.563 }, 00:09:10.563 "method": "bdev_nvme_attach_controller" 00:09:10.563 }' 00:09:10.563 [2024-10-01 13:37:02.284428] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:09:10.563 [2024-10-01 13:37:02.284727] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66707 ] 00:09:10.822 [2024-10-01 13:37:02.425390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:10.822 [2024-10-01 13:37:02.496466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.822 [2024-10-01 13:37:02.496604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.822 [2024-10-01 13:37:02.496608] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.822 [2024-10-01 13:37:02.539846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.822 I/O targets: 00:09:10.822 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:10.822 00:09:10.822 00:09:10.822 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.822 http://cunit.sourceforge.net/ 00:09:10.822 00:09:10.822 00:09:10.822 Suite: bdevio tests on: Nvme1n1 00:09:10.822 Test: blockdev write read block ...passed 00:09:10.822 Test: blockdev write zeroes read block ...passed 00:09:10.822 Test: blockdev write zeroes read no split ...passed 00:09:10.822 Test: blockdev write zeroes read split ...passed 00:09:10.822 Test: blockdev write zeroes read split partial ...passed 00:09:10.822 Test: blockdev reset ...[2024-10-01 13:37:02.673835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:10.822 [2024-10-01 13:37:02.674200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159e040 (9): Bad file descriptor 00:09:11.081 [2024-10-01 13:37:02.690975] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:11.081 passed 00:09:11.081 Test: blockdev write read 8 blocks ...passed 00:09:11.081 Test: blockdev write read size > 128k ...passed 00:09:11.081 Test: blockdev write read invalid size ...passed 00:09:11.081 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:11.081 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:11.081 Test: blockdev write read max offset ...passed 00:09:11.081 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:11.081 Test: blockdev writev readv 8 blocks ...passed 00:09:11.081 Test: blockdev writev readv 30 x 1block ...passed 00:09:11.082 Test: blockdev writev readv block ...passed 00:09:11.082 Test: blockdev writev readv size > 128k ...passed 00:09:11.082 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:11.082 Test: blockdev comparev and writev ...[2024-10-01 13:37:02.699989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.082 [2024-10-01 13:37:02.700158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:11.082 [2024-10-01 13:37:02.700188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.082 [2024-10-01 13:37:02.700201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:11.082 [2024-10-01 13:37:02.700497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.082 [2024-10-01 13:37:02.700524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:11.082 [2024-10-01 13:37:02.700556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.082 [2024-10-01 13:37:02.700570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:11.082 [2024-10-01 13:37:02.700852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.082 [2024-10-01 13:37:02.700875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:11.082 [2024-10-01 13:37:02.700892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.082 [2024-10-01 13:37:02.700902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:11.082 [2024-10-01 13:37:02.701224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.082 [2024-10-01 13:37:02.701255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:11.082 [2024-10-01 13:37:02.701273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.082 [2024-10-01 13:37:02.701283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:11.082 passed 00:09:11.082 Test: blockdev nvme passthru rw ...passed 00:09:11.082 Test: blockdev nvme passthru vendor specific ...[2024-10-01 13:37:02.702142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:11.082 [2024-10-01 13:37:02.702169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:11.082 [2024-10-01 13:37:02.702278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:11.082 [2024-10-01 13:37:02.702294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:11.082 [2024-10-01 13:37:02.702405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:11.082 [2024-10-01 13:37:02.702426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:11.082 [2024-10-01 13:37:02.702552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:11.082 [2024-10-01 13:37:02.702570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:11.082 passed 00:09:11.082 Test: blockdev nvme admin passthru ...passed 00:09:11.082 Test: blockdev copy ...passed 00:09:11.082 00:09:11.082 Run Summary: Type Total Ran Passed Failed Inactive 00:09:11.082 suites 1 1 n/a 0 0 00:09:11.082 tests 23 23 23 0 0 00:09:11.082 asserts 152 152 152 0 n/a 00:09:11.082 00:09:11.082 Elapsed time = 0.146 seconds 00:09:11.341 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.341 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.341 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:11.341 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.341 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:11.341 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:11.341 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:11.341 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:11.341 rmmod nvme_tcp 00:09:11.341 rmmod nvme_fabrics 00:09:11.341 rmmod nvme_keyring 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
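[Editor's note] Every firewall rule inserted during setup carried an 'SPDK_NVMF:' comment, so the teardown below does not need to track individual rules: it filters the tagged ones out of the saved ruleset in one pass. The idiom, exactly as it appears in the iptr trace that follows:

    # drop every rule the test added, identified by its comment tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore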
00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 66684 ']' 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 66684 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 66684 ']' 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 66684 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66684 00:09:11.341 killing process with pid 66684 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66684' 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 66684 00:09:11.341 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 66684 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:11.600 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:11.860 00:09:11.860 real 0m2.534s 00:09:11.860 user 0m6.409s 00:09:11.860 sys 0m0.826s 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:11.860 ************************************ 00:09:11.860 END TEST nvmf_bdevio 00:09:11.860 ************************************ 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:11.860 ************************************ 00:09:11.860 END TEST nvmf_target_core 00:09:11.860 ************************************ 00:09:11.860 00:09:11.860 real 2m33.517s 00:09:11.860 user 6m38.262s 00:09:11.860 sys 0m52.208s 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.860 13:37:03 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:11.860 13:37:03 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:11.860 13:37:03 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.860 13:37:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:11.860 ************************************ 00:09:11.860 START TEST nvmf_target_extra 00:09:11.860 ************************************ 00:09:11.860 13:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:12.119 * Looking for test storage... 
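[Editor's note] The next stretch of trace is scripts/common.sh probing the installed lcov and deciding, via a field-by-field compare of '1.15' against '2', which coverage flags to export. A condensed sketch of that comparison, with simplified variable names standing in for the ver1/ver2 arrays in the trace:

    # split versions on '.', '-' or ':' and compare field by field; missing fields count as 0
    IFS=.-: read -ra ver1 <<< "1.15"
    IFS=.-: read -ra ver2 <<< "2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { echo "1.15 >= 2"; break; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { echo "1.15 < 2"; break; }
    done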
00:09:12.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:12.119 13:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:12.119 13:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:09:12.119 13:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:12.119 13:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:12.119 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.119 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:12.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.120 --rc genhtml_branch_coverage=1 00:09:12.120 --rc genhtml_function_coverage=1 00:09:12.120 --rc genhtml_legend=1 00:09:12.120 --rc geninfo_all_blocks=1 00:09:12.120 --rc geninfo_unexecuted_blocks=1 00:09:12.120 00:09:12.120 ' 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:12.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.120 --rc genhtml_branch_coverage=1 00:09:12.120 --rc genhtml_function_coverage=1 00:09:12.120 --rc genhtml_legend=1 00:09:12.120 --rc geninfo_all_blocks=1 00:09:12.120 --rc geninfo_unexecuted_blocks=1 00:09:12.120 00:09:12.120 ' 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:12.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.120 --rc genhtml_branch_coverage=1 00:09:12.120 --rc genhtml_function_coverage=1 00:09:12.120 --rc genhtml_legend=1 00:09:12.120 --rc geninfo_all_blocks=1 00:09:12.120 --rc geninfo_unexecuted_blocks=1 00:09:12.120 00:09:12.120 ' 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:12.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.120 --rc genhtml_branch_coverage=1 00:09:12.120 --rc genhtml_function_coverage=1 00:09:12.120 --rc genhtml_legend=1 00:09:12.120 --rc geninfo_all_blocks=1 00:09:12.120 --rc geninfo_unexecuted_blocks=1 00:09:12.120 00:09:12.120 ' 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.120 13:37:03 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.120 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.121 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:12.121 ************************************ 00:09:12.121 START TEST nvmf_auth_target 00:09:12.121 ************************************ 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:12.121 * Looking for test storage... 
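[Editor's note] The "[: : integer expression expected" complaint above, repeated each time nvmf/common.sh is sourced, comes from build_nvmf_app_args testing an empty variable with -eq ('[' '' -eq 1 ']' at common.sh line 33). It is harmless for this run, but the usual guard is to default the value before the numeric test. A sketch, with SOME_TEST_FLAG and the appended option as hypothetical stand-ins, since the trace does not show which variable is unset:

    # default empty/unset flags to 0 so numeric tests never see an empty string
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--hypothetical-option)   # whatever common.sh would append on this branch
    fi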
00:09:12.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:09:12.121 13:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:12.380 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:12.380 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.380 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.380 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.380 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.380 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.380 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:12.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.381 --rc genhtml_branch_coverage=1 00:09:12.381 --rc genhtml_function_coverage=1 00:09:12.381 --rc genhtml_legend=1 00:09:12.381 --rc geninfo_all_blocks=1 00:09:12.381 --rc geninfo_unexecuted_blocks=1 00:09:12.381 00:09:12.381 ' 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:12.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.381 --rc genhtml_branch_coverage=1 00:09:12.381 --rc genhtml_function_coverage=1 00:09:12.381 --rc genhtml_legend=1 00:09:12.381 --rc geninfo_all_blocks=1 00:09:12.381 --rc geninfo_unexecuted_blocks=1 00:09:12.381 00:09:12.381 ' 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:12.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.381 --rc genhtml_branch_coverage=1 00:09:12.381 --rc genhtml_function_coverage=1 00:09:12.381 --rc genhtml_legend=1 00:09:12.381 --rc geninfo_all_blocks=1 00:09:12.381 --rc geninfo_unexecuted_blocks=1 00:09:12.381 00:09:12.381 ' 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:12.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.381 --rc genhtml_branch_coverage=1 00:09:12.381 --rc genhtml_function_coverage=1 00:09:12.381 --rc genhtml_legend=1 00:09:12.381 --rc geninfo_all_blocks=1 00:09:12.381 --rc geninfo_unexecuted_blocks=1 00:09:12.381 00:09:12.381 ' 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:12.381 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.382 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:12.382 
13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:12.382 Cannot find device "nvmf_init_br" 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:12.382 Cannot find device "nvmf_init_br2" 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:12.382 Cannot find device "nvmf_tgt_br" 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.382 Cannot find device "nvmf_tgt_br2" 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:12.382 Cannot find device "nvmf_init_br" 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:12.382 Cannot find device "nvmf_init_br2" 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:12.382 Cannot find device "nvmf_tgt_br" 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:12.382 Cannot find device "nvmf_tgt_br2" 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:12.382 Cannot find device "nvmf_br" 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:12.382 Cannot find device "nvmf_init_if" 00:09:12.382 13:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:12.382 Cannot find device "nvmf_init_if2" 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.382 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:12.382 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:12.642 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.643 13:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:12.643 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:12.643 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:09:12.643 00:09:12.643 --- 10.0.0.3 ping statistics --- 00:09:12.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.643 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:12.643 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:12.643 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:09:12.643 00:09:12.643 --- 10.0.0.4 ping statistics --- 00:09:12.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.643 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:12.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:12.643 00:09:12.643 --- 10.0.0.1 ping statistics --- 00:09:12.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.643 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:12.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:09:12.643 00:09:12.643 --- 10.0.0.2 ping statistics --- 00:09:12.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.643 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:12.643 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:12.902 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:12.902 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:12.902 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=66989 00:09:12.903 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:12.903 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 66989 00:09:12.903 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 66989 ']' 00:09:12.903 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.903 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.903 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
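Note: the block above is nvmf_veth_init's network bring-up. It creates four veth pairs, moves the nvmf_tgt_if/nvmf_tgt_if2 ends into the nvmf_tgt_ns_spdk namespace, joins the four *_br peer ends in the nvmf_br bridge, opens TCP port 4420 towards the initiator interfaces, and verifies reachability in both directions with single pings. A condensed sketch of the same topology, showing only the first initiator/target pair; addresses and names are copied from the log:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                   # initiator -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator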
00:09:12.903 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.903 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67019 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=666f11549d53a6752a2d7c6ed9a3f953285fc8e54c75dd5f 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.woV 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 666f11549d53a6752a2d7c6ed9a3f953285fc8e54c75dd5f 0 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 666f11549d53a6752a2d7c6ed9a3f953285fc8e54c75dd5f 0 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=666f11549d53a6752a2d7c6ed9a3f953285fc8e54c75dd5f 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:09:13.161 13:37:04 
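Note: at this point two SPDK applications have been launched, as shown in the entries above: the NVMe-oF target inside the network namespace, and a second spdk_tgt that plays the host/initiator role on its own RPC socket. The DH-HMAC-CHAP keys generated next are registered with both of them. Condensed from the log:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &   # pid 66989, RPC on /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &                      # pid 67019, host side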
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.woV 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.woV 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.woV 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=48ca70933e3351acb741b34d79cfebc153d0b2a9cec08d2f987e9534c1707070 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.ANI 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 48ca70933e3351acb741b34d79cfebc153d0b2a9cec08d2f987e9534c1707070 3 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 48ca70933e3351acb741b34d79cfebc153d0b2a9cec08d2f987e9534c1707070 3 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=48ca70933e3351acb741b34d79cfebc153d0b2a9cec08d2f987e9534c1707070 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:09:13.161 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:09:13.161 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.ANI 00:09:13.422 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.ANI 00:09:13.422 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ANI 00:09:13.422 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:13.422 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:09:13.422 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:09:13.423 13:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=abb8988c19bf2003b23f0c4cc6d60871 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.cIe 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key abb8988c19bf2003b23f0c4cc6d60871 1 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 abb8988c19bf2003b23f0c4cc6d60871 1 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=abb8988c19bf2003b23f0c4cc6d60871 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.cIe 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.cIe 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.cIe 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=69daafd5ff6872d4fdfa7d3dfd9930f6a2ebba778fbd8dcd 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.8hS 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 69daafd5ff6872d4fdfa7d3dfd9930f6a2ebba778fbd8dcd 2 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 69daafd5ff6872d4fdfa7d3dfd9930f6a2ebba778fbd8dcd 2 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=69daafd5ff6872d4fdfa7d3dfd9930f6a2ebba778fbd8dcd 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.8hS 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.8hS 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.8hS 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=2e30d57b069441f64fac80180ec540cea0318c228dcc97d7 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.bPJ 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 2e30d57b069441f64fac80180ec540cea0318c228dcc97d7 2 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 2e30d57b069441f64fac80180ec540cea0318c228dcc97d7 2 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=2e30d57b069441f64fac80180ec540cea0318c228dcc97d7 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.bPJ 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.bPJ 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.bPJ 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:09:13.423 13:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=93b10ff4e7502f90c90631a9f5a9ca5e 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.YiY 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 93b10ff4e7502f90c90631a9f5a9ca5e 1 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 93b10ff4e7502f90c90631a9f5a9ca5e 1 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=93b10ff4e7502f90c90631a9f5a9ca5e 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:09:13.423 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.YiY 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.YiY 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.YiY 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=a25910a7f3e4f7c48663ad093ff255ba05f6274b107298c9c8436717bb6154c3 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.HQx 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
a25910a7f3e4f7c48663ad093ff255ba05f6274b107298c9c8436717bb6154c3 3 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 a25910a7f3e4f7c48663ad093ff255ba05f6274b107298c9c8436717bb6154c3 3 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=a25910a7f3e4f7c48663ad093ff255ba05f6274b107298c9c8436717bb6154c3 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.HQx 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.HQx 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.HQx 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 66989 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 66989 ']' 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.685 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.290 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.290 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:14.290 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67019 /var/tmp/host.sock 00:09:14.290 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67019 ']' 00:09:14.290 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:09:14.290 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:14.290 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
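Note: gen_dhchap_key, exercised above for every digest/length combination, reads len/2 random bytes from /dev/urandom as a hex string, wraps that hex string in the DHHC-1 secret representation used by NVMe in-band authentication, and stores it in a mode-0600 temp file. keys[0..3] are the host-side secrets (null-48, sha256-32, sha384-48, sha512-64); ckeys[0..2] are the matching controller secrets, and key 3 deliberately has no controller key so unidirectional authentication is also covered. The helper invoked as "python -" is not reproduced in the log; the sketch below assumes the base64 payload is the hex text followed by a little-endian CRC32 trailer, which matches the length of the secrets seen later but is otherwise an assumption:

hexkey=$(xxd -p -c0 -l 24 /dev/urandom)      # 48 hex chars, as for "gen_dhchap_key null 48"
keyfile=$(mktemp -t spdk.key-null.XXX)
# digest id: 00 = null, 01 = sha256, 02 = sha384, 03 = sha512; CRC32 trailer is an assumption
python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$hexkey" > "$keyfile"
chmod 0600 "$keyfile"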
00:09:14.290 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.290 13:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.290 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.290 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:14.290 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:14.290 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.290 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.550 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.550 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:14.550 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.woV 00:09:14.550 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.550 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.550 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.550 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.woV 00:09:14.550 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.woV 00:09:14.809 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.ANI ]] 00:09:14.809 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ANI 00:09:14.809 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.809 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.809 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.809 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ANI 00:09:14.809 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ANI 00:09:15.068 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:15.068 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cIe 00:09:15.068 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.068 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.068 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.068 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.cIe 00:09:15.068 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.cIe 00:09:15.327 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.8hS ]] 00:09:15.327 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8hS 00:09:15.327 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.327 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.327 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.327 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8hS 00:09:15.327 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8hS 00:09:15.894 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:15.894 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.bPJ 00:09:15.894 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.894 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.894 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.894 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.bPJ 00:09:15.894 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.bPJ 00:09:16.152 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.YiY ]] 00:09:16.152 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YiY 00:09:16.152 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.152 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.152 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.152 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YiY 00:09:16.152 13:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YiY 00:09:16.411 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:16.411 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.HQx 00:09:16.411 13:37:08 
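Note: every keyfile is registered twice through keyring_file_add_key: once with the NVMe-oF target on its default RPC socket /var/tmp/spdk.sock, and once with the host-side spdk_tgt on /var/tmp/host.sock, under the names key0..key3 and ckey0..ckey2 that the connect steps below refer to. Condensed, with paths copied from the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.woV                            # target
$rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.woV      # host
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ANI
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ANI
# ... and likewise for key1/ckey1, key2/ckey2 and key3 (which has no controller key)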
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.411 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.411 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.411 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.HQx 00:09:16.411 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.HQx 00:09:16.670 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:16.670 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:16.670 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:16.670 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:16.670 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:16.670 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:16.930 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:16.930 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:16.930 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:16.930 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:16.930 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:16.930 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:16.930 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:16.930 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.930 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.930 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.930 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:16.930 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:16.930 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:17.496 00:09:17.496 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:17.496 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:17.496 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:17.755 { 00:09:17.755 "cntlid": 1, 00:09:17.755 "qid": 0, 00:09:17.755 "state": "enabled", 00:09:17.755 "thread": "nvmf_tgt_poll_group_000", 00:09:17.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:17.755 "listen_address": { 00:09:17.755 "trtype": "TCP", 00:09:17.755 "adrfam": "IPv4", 00:09:17.755 "traddr": "10.0.0.3", 00:09:17.755 "trsvcid": "4420" 00:09:17.755 }, 00:09:17.755 "peer_address": { 00:09:17.755 "trtype": "TCP", 00:09:17.755 "adrfam": "IPv4", 00:09:17.755 "traddr": "10.0.0.1", 00:09:17.755 "trsvcid": "40468" 00:09:17.755 }, 00:09:17.755 "auth": { 00:09:17.755 "state": "completed", 00:09:17.755 "digest": "sha256", 00:09:17.755 "dhgroup": "null" 00:09:17.755 } 00:09:17.755 } 00:09:17.755 ]' 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:17.755 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:18.014 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:09:18.014 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:09:23.281 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:23.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:23.281 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:23.281 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.281 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.281 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.281 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:23.281 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:23.281 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:23.540 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:23.540 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:23.540 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:23.540 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:23.540 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:23.540 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:23.540 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:23.540 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.540 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.540 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.540 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:23.540 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:23.540 13:37:15 
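Note: the loop starting at target/auth.sh@118 repeats the same round for every digest, dhgroup and key slot; the entries above and below show the sha256/null rounds for keys 0 and 1. One round, condensed from the log ($hostnqn and $hostid stand for the nqn.2014-08.org.nvmexpress:uuid:2b7d6042-... host NQN and its UUID; the literal DHHC-1 secrets appear in full in the corresponding nvme connect lines):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect "completed"
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
$rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"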
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:23.798 00:09:23.798 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:23.798 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:23.798 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:24.056 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:24.056 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:24.056 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.056 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.315 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.315 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:24.315 { 00:09:24.315 "cntlid": 3, 00:09:24.315 "qid": 0, 00:09:24.315 "state": "enabled", 00:09:24.315 "thread": "nvmf_tgt_poll_group_000", 00:09:24.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:24.315 "listen_address": { 00:09:24.315 "trtype": "TCP", 00:09:24.315 "adrfam": "IPv4", 00:09:24.315 "traddr": "10.0.0.3", 00:09:24.315 "trsvcid": "4420" 00:09:24.315 }, 00:09:24.315 "peer_address": { 00:09:24.315 "trtype": "TCP", 00:09:24.315 "adrfam": "IPv4", 00:09:24.315 "traddr": "10.0.0.1", 00:09:24.315 "trsvcid": "40494" 00:09:24.315 }, 00:09:24.315 "auth": { 00:09:24.315 "state": "completed", 00:09:24.315 "digest": "sha256", 00:09:24.315 "dhgroup": "null" 00:09:24.315 } 00:09:24.315 } 00:09:24.315 ]' 00:09:24.315 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:24.315 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:24.315 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:24.315 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:24.315 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:24.315 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:24.315 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:24.315 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:24.573 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret 
DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:09:24.573 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:09:25.504 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:25.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:25.504 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:25.504 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.504 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.504 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.504 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:25.504 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:25.504 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:25.761 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:25.761 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:25.761 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:25.761 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:25.761 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:25.761 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:25.761 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:25.761 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.761 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.761 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.761 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:25.761 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:25.761 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:26.326 00:09:26.326 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:26.326 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:26.326 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:26.584 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:26.584 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:26.584 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.584 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.584 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.584 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:26.584 { 00:09:26.584 "cntlid": 5, 00:09:26.584 "qid": 0, 00:09:26.584 "state": "enabled", 00:09:26.584 "thread": "nvmf_tgt_poll_group_000", 00:09:26.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:26.584 "listen_address": { 00:09:26.584 "trtype": "TCP", 00:09:26.584 "adrfam": "IPv4", 00:09:26.584 "traddr": "10.0.0.3", 00:09:26.584 "trsvcid": "4420" 00:09:26.584 }, 00:09:26.584 "peer_address": { 00:09:26.584 "trtype": "TCP", 00:09:26.584 "adrfam": "IPv4", 00:09:26.584 "traddr": "10.0.0.1", 00:09:26.584 "trsvcid": "40902" 00:09:26.584 }, 00:09:26.584 "auth": { 00:09:26.584 "state": "completed", 00:09:26.584 "digest": "sha256", 00:09:26.584 "dhgroup": "null" 00:09:26.584 } 00:09:26.584 } 00:09:26.584 ]' 00:09:26.584 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:26.584 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:26.584 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:26.584 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:26.584 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:26.843 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:26.843 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:26.843 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:27.104 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:09:27.104 13:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:09:27.669 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:27.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:27.926 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:27.926 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.926 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:27.926 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.926 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:27.926 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:27.926 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:28.185 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:28.185 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:28.185 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:28.185 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:28.185 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:28.185 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:28.185 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:09:28.185 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.185 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.185 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.185 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:28.185 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:28.185 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:28.442 00:09:28.442 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:28.442 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:28.442 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:29.009 { 00:09:29.009 "cntlid": 7, 00:09:29.009 "qid": 0, 00:09:29.009 "state": "enabled", 00:09:29.009 "thread": "nvmf_tgt_poll_group_000", 00:09:29.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:29.009 "listen_address": { 00:09:29.009 "trtype": "TCP", 00:09:29.009 "adrfam": "IPv4", 00:09:29.009 "traddr": "10.0.0.3", 00:09:29.009 "trsvcid": "4420" 00:09:29.009 }, 00:09:29.009 "peer_address": { 00:09:29.009 "trtype": "TCP", 00:09:29.009 "adrfam": "IPv4", 00:09:29.009 "traddr": "10.0.0.1", 00:09:29.009 "trsvcid": "40948" 00:09:29.009 }, 00:09:29.009 "auth": { 00:09:29.009 "state": "completed", 00:09:29.009 "digest": "sha256", 00:09:29.009 "dhgroup": "null" 00:09:29.009 } 00:09:29.009 } 00:09:29.009 ]' 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:29.009 13:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:29.267 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:09:29.267 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:09:30.198 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:30.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:30.198 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:30.198 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.198 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.198 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.198 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:30.198 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:30.198 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:30.198 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:30.457 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:09:30.457 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:30.457 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:30.457 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:30.457 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:30.457 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:30.457 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:30.457 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.457 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.457 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.457 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:30.457 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:30.457 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:30.715 00:09:30.716 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:30.716 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:30.716 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:30.973 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:30.973 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:30.973 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.973 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.973 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.973 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:30.973 { 00:09:30.973 "cntlid": 9, 00:09:30.973 "qid": 0, 00:09:30.973 "state": "enabled", 00:09:30.973 "thread": "nvmf_tgt_poll_group_000", 00:09:30.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:30.973 "listen_address": { 00:09:30.973 "trtype": "TCP", 00:09:30.973 "adrfam": "IPv4", 00:09:30.973 "traddr": "10.0.0.3", 00:09:30.973 "trsvcid": "4420" 00:09:30.973 }, 00:09:30.973 "peer_address": { 00:09:30.973 "trtype": "TCP", 00:09:30.973 "adrfam": "IPv4", 00:09:30.973 "traddr": "10.0.0.1", 00:09:30.973 "trsvcid": "40990" 00:09:30.973 }, 00:09:30.973 "auth": { 00:09:30.973 "state": "completed", 00:09:30.973 "digest": "sha256", 00:09:30.973 "dhgroup": "ffdhe2048" 00:09:30.973 } 00:09:30.973 } 00:09:30.973 ]' 00:09:30.973 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:31.231 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:31.231 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:31.231 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:31.231 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:31.231 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:31.231 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:31.231 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:31.488 
13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:09:31.488 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:09:32.423 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:32.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:32.423 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:32.423 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.423 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.423 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.423 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:32.423 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:32.423 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:32.682 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:09:32.682 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:32.682 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:32.682 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:32.682 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:32.682 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:32.683 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:32.683 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.683 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.683 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.683 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:32.683 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:32.683 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:33.249 00:09:33.249 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:33.249 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:33.249 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:33.506 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:33.506 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:33.506 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.506 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.506 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.506 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:33.506 { 00:09:33.506 "cntlid": 11, 00:09:33.506 "qid": 0, 00:09:33.506 "state": "enabled", 00:09:33.506 "thread": "nvmf_tgt_poll_group_000", 00:09:33.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:33.506 "listen_address": { 00:09:33.506 "trtype": "TCP", 00:09:33.506 "adrfam": "IPv4", 00:09:33.506 "traddr": "10.0.0.3", 00:09:33.506 "trsvcid": "4420" 00:09:33.506 }, 00:09:33.506 "peer_address": { 00:09:33.506 "trtype": "TCP", 00:09:33.506 "adrfam": "IPv4", 00:09:33.506 "traddr": "10.0.0.1", 00:09:33.506 "trsvcid": "41012" 00:09:33.506 }, 00:09:33.506 "auth": { 00:09:33.506 "state": "completed", 00:09:33.506 "digest": "sha256", 00:09:33.506 "dhgroup": "ffdhe2048" 00:09:33.506 } 00:09:33.506 } 00:09:33.506 ]' 00:09:33.506 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:33.506 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:33.506 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:33.506 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:33.506 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:33.764 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:33.764 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:33.764 
13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:34.023 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:09:34.023 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:09:34.590 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:34.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:34.590 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:34.590 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.590 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.590 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.590 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:34.590 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:34.590 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:35.156 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:09:35.156 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:35.156 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:35.156 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:35.156 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:35.156 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:35.156 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:35.156 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.156 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.156 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:35.156 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:35.156 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:35.156 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:35.415 00:09:35.415 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:35.415 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:35.415 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:35.674 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:35.674 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:35.674 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.674 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.674 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.674 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:35.674 { 00:09:35.674 "cntlid": 13, 00:09:35.674 "qid": 0, 00:09:35.674 "state": "enabled", 00:09:35.674 "thread": "nvmf_tgt_poll_group_000", 00:09:35.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:35.674 "listen_address": { 00:09:35.674 "trtype": "TCP", 00:09:35.674 "adrfam": "IPv4", 00:09:35.674 "traddr": "10.0.0.3", 00:09:35.674 "trsvcid": "4420" 00:09:35.674 }, 00:09:35.674 "peer_address": { 00:09:35.674 "trtype": "TCP", 00:09:35.674 "adrfam": "IPv4", 00:09:35.674 "traddr": "10.0.0.1", 00:09:35.674 "trsvcid": "41018" 00:09:35.674 }, 00:09:35.674 "auth": { 00:09:35.674 "state": "completed", 00:09:35.674 "digest": "sha256", 00:09:35.674 "dhgroup": "ffdhe2048" 00:09:35.674 } 00:09:35.674 } 00:09:35.674 ]' 00:09:35.674 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:35.674 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:35.674 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:35.932 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:35.932 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:35.932 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:35.932 13:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:35.932 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:36.190 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:09:36.190 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:09:37.125 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:37.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:37.125 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:37.125 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.125 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.125 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.125 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:37.125 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:37.125 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:37.387 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:09:37.387 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:37.387 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:37.387 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:37.387 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:37.387 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:37.387 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:09:37.387 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.387 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:09:37.387 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.387 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:37.387 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:37.387 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:37.686 00:09:37.686 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:37.686 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:37.686 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:37.956 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:37.956 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:37.956 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.956 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.215 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.215 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:38.215 { 00:09:38.215 "cntlid": 15, 00:09:38.215 "qid": 0, 00:09:38.215 "state": "enabled", 00:09:38.215 "thread": "nvmf_tgt_poll_group_000", 00:09:38.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:38.215 "listen_address": { 00:09:38.215 "trtype": "TCP", 00:09:38.215 "adrfam": "IPv4", 00:09:38.215 "traddr": "10.0.0.3", 00:09:38.215 "trsvcid": "4420" 00:09:38.215 }, 00:09:38.215 "peer_address": { 00:09:38.215 "trtype": "TCP", 00:09:38.215 "adrfam": "IPv4", 00:09:38.215 "traddr": "10.0.0.1", 00:09:38.215 "trsvcid": "41984" 00:09:38.215 }, 00:09:38.215 "auth": { 00:09:38.215 "state": "completed", 00:09:38.215 "digest": "sha256", 00:09:38.215 "dhgroup": "ffdhe2048" 00:09:38.215 } 00:09:38.215 } 00:09:38.215 ]' 00:09:38.215 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:38.215 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:38.215 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:38.215 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:38.215 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:38.215 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:38.215 
13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:38.215 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:38.472 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:09:38.472 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:09:39.407 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:39.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:39.407 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:39.407 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.407 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.407 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.407 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:39.407 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:39.407 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:39.407 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:39.666 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:09:39.666 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:39.666 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:39.666 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:39.666 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:39.666 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:39.666 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:39.666 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.666 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.666 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.666 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:39.666 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:39.666 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:40.234 00:09:40.234 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:40.234 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:40.235 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:40.494 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:40.494 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:40.494 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.494 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.494 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.494 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:40.494 { 00:09:40.494 "cntlid": 17, 00:09:40.494 "qid": 0, 00:09:40.494 "state": "enabled", 00:09:40.494 "thread": "nvmf_tgt_poll_group_000", 00:09:40.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:40.494 "listen_address": { 00:09:40.494 "trtype": "TCP", 00:09:40.494 "adrfam": "IPv4", 00:09:40.494 "traddr": "10.0.0.3", 00:09:40.494 "trsvcid": "4420" 00:09:40.494 }, 00:09:40.494 "peer_address": { 00:09:40.494 "trtype": "TCP", 00:09:40.494 "adrfam": "IPv4", 00:09:40.494 "traddr": "10.0.0.1", 00:09:40.494 "trsvcid": "42014" 00:09:40.494 }, 00:09:40.494 "auth": { 00:09:40.494 "state": "completed", 00:09:40.494 "digest": "sha256", 00:09:40.494 "dhgroup": "ffdhe3072" 00:09:40.494 } 00:09:40.494 } 00:09:40.494 ]' 00:09:40.494 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:40.494 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:40.494 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:40.494 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:40.494 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:40.494 13:37:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:40.494 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:40.494 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:41.061 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:09:41.061 13:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:09:41.627 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:41.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:41.627 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:41.627 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.627 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.627 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.627 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:41.627 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:41.627 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:41.886 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:09:41.886 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:41.886 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:41.886 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:41.886 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:41.886 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:41.887 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:09:41.887 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.887 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.887 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.887 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:41.887 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:41.887 13:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:42.453 00:09:42.453 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:42.453 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:42.453 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:42.711 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:42.711 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:42.711 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.711 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.711 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.711 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:42.711 { 00:09:42.711 "cntlid": 19, 00:09:42.711 "qid": 0, 00:09:42.711 "state": "enabled", 00:09:42.711 "thread": "nvmf_tgt_poll_group_000", 00:09:42.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:42.711 "listen_address": { 00:09:42.711 "trtype": "TCP", 00:09:42.711 "adrfam": "IPv4", 00:09:42.711 "traddr": "10.0.0.3", 00:09:42.711 "trsvcid": "4420" 00:09:42.711 }, 00:09:42.711 "peer_address": { 00:09:42.711 "trtype": "TCP", 00:09:42.711 "adrfam": "IPv4", 00:09:42.711 "traddr": "10.0.0.1", 00:09:42.711 "trsvcid": "42038" 00:09:42.711 }, 00:09:42.711 "auth": { 00:09:42.711 "state": "completed", 00:09:42.711 "digest": "sha256", 00:09:42.711 "dhgroup": "ffdhe3072" 00:09:42.711 } 00:09:42.711 } 00:09:42.711 ]' 00:09:42.711 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:42.711 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:42.711 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:42.711 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:42.711 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:42.969 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:42.969 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:42.969 13:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:43.227 13:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:09:43.228 13:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:09:44.163 13:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:44.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:44.163 13:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:44.163 13:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.163 13:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.163 13:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.163 13:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:44.163 13:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:44.163 13:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:44.730 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:09:44.730 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:44.730 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:44.730 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:44.730 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:44.730 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:44.730 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:44.730 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.730 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.730 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.730 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:44.730 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:44.730 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:44.989 00:09:44.989 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:44.989 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:44.989 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:45.247 13:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:45.247 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:45.247 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.247 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.247 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.247 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:45.247 { 00:09:45.247 "cntlid": 21, 00:09:45.247 "qid": 0, 00:09:45.247 "state": "enabled", 00:09:45.247 "thread": "nvmf_tgt_poll_group_000", 00:09:45.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:45.247 "listen_address": { 00:09:45.247 "trtype": "TCP", 00:09:45.247 "adrfam": "IPv4", 00:09:45.247 "traddr": "10.0.0.3", 00:09:45.247 "trsvcid": "4420" 00:09:45.247 }, 00:09:45.247 "peer_address": { 00:09:45.247 "trtype": "TCP", 00:09:45.247 "adrfam": "IPv4", 00:09:45.247 "traddr": "10.0.0.1", 00:09:45.247 "trsvcid": "42062" 00:09:45.247 }, 00:09:45.247 "auth": { 00:09:45.247 "state": "completed", 00:09:45.247 "digest": "sha256", 00:09:45.247 "dhgroup": "ffdhe3072" 00:09:45.247 } 00:09:45.247 } 00:09:45.247 ]' 00:09:45.247 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:45.247 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:45.247 13:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:45.507 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:45.507 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:45.507 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:45.507 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:45.507 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:45.765 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:09:45.765 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:09:46.331 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:46.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:46.590 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:46.590 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.590 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.590 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.590 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:46.590 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:46.590 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:46.849 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:09:46.849 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:46.849 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:46.849 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:46.849 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:46.849 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:46.849 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:09:46.849 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.849 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.849 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.849 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:46.849 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:46.849 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:47.109 00:09:47.109 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:47.109 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:47.109 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:47.674 { 00:09:47.674 "cntlid": 23, 00:09:47.674 "qid": 0, 00:09:47.674 "state": "enabled", 00:09:47.674 "thread": "nvmf_tgt_poll_group_000", 00:09:47.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:47.674 "listen_address": { 00:09:47.674 "trtype": "TCP", 00:09:47.674 "adrfam": "IPv4", 00:09:47.674 "traddr": "10.0.0.3", 00:09:47.674 "trsvcid": "4420" 00:09:47.674 }, 00:09:47.674 "peer_address": { 00:09:47.674 "trtype": "TCP", 00:09:47.674 "adrfam": "IPv4", 00:09:47.674 "traddr": "10.0.0.1", 00:09:47.674 "trsvcid": "36512" 00:09:47.674 }, 00:09:47.674 "auth": { 00:09:47.674 "state": "completed", 00:09:47.674 "digest": "sha256", 00:09:47.674 "dhgroup": "ffdhe3072" 00:09:47.674 } 00:09:47.674 } 00:09:47.674 ]' 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:47.674 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:47.933 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:09:47.933 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:09:48.868 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:48.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:48.868 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:48.868 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.868 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.868 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.868 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:48.868 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:48.868 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:48.868 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:49.127 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:09:49.127 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:49.127 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:49.127 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:49.127 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:49.127 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:49.127 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.127 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.127 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.385 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.385 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.385 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.385 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.643 00:09:49.643 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:49.644 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:49.644 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:50.212 { 00:09:50.212 "cntlid": 25, 00:09:50.212 "qid": 0, 00:09:50.212 "state": "enabled", 00:09:50.212 "thread": "nvmf_tgt_poll_group_000", 00:09:50.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:50.212 "listen_address": { 00:09:50.212 "trtype": "TCP", 00:09:50.212 "adrfam": "IPv4", 00:09:50.212 "traddr": "10.0.0.3", 00:09:50.212 "trsvcid": "4420" 00:09:50.212 }, 00:09:50.212 "peer_address": { 00:09:50.212 "trtype": "TCP", 00:09:50.212 "adrfam": "IPv4", 00:09:50.212 "traddr": "10.0.0.1", 00:09:50.212 "trsvcid": "36546" 00:09:50.212 }, 00:09:50.212 "auth": { 00:09:50.212 "state": "completed", 00:09:50.212 "digest": "sha256", 00:09:50.212 "dhgroup": "ffdhe4096" 00:09:50.212 } 00:09:50.212 } 00:09:50.212 ]' 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:50.212 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:50.471 13:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:09:50.471 13:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:09:51.408 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:51.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:51.408 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:51.408 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.408 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.408 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.408 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:51.408 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:51.408 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:51.666 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:09:51.666 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:51.666 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:51.666 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:51.666 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:51.666 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:51.666 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:51.666 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.666 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.666 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.666 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:51.666 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:51.666 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:52.233 00:09:52.233 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:52.233 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:52.233 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:52.490 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.491 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.491 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.491 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.491 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.491 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:52.491 { 00:09:52.491 "cntlid": 27, 00:09:52.491 "qid": 0, 00:09:52.491 "state": "enabled", 00:09:52.491 "thread": "nvmf_tgt_poll_group_000", 00:09:52.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:52.491 "listen_address": { 00:09:52.491 "trtype": "TCP", 00:09:52.491 "adrfam": "IPv4", 00:09:52.491 "traddr": "10.0.0.3", 00:09:52.491 "trsvcid": "4420" 00:09:52.491 }, 00:09:52.491 "peer_address": { 00:09:52.491 "trtype": "TCP", 00:09:52.491 "adrfam": "IPv4", 00:09:52.491 "traddr": "10.0.0.1", 00:09:52.491 "trsvcid": "36556" 00:09:52.491 }, 00:09:52.491 "auth": { 00:09:52.491 "state": "completed", 
00:09:52.491 "digest": "sha256", 00:09:52.491 "dhgroup": "ffdhe4096" 00:09:52.491 } 00:09:52.491 } 00:09:52.491 ]' 00:09:52.491 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:52.491 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:52.491 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:52.749 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:52.749 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:52.749 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.749 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.749 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:53.007 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:09:53.007 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:53.941 13:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:53.941 13:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:54.509 00:09:54.509 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:54.509 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:54.509 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:54.767 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.767 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.767 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.767 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.767 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.767 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:54.767 { 00:09:54.767 "cntlid": 29, 00:09:54.767 "qid": 0, 00:09:54.767 "state": "enabled", 00:09:54.767 "thread": "nvmf_tgt_poll_group_000", 00:09:54.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:54.767 "listen_address": { 00:09:54.767 "trtype": "TCP", 00:09:54.767 "adrfam": "IPv4", 00:09:54.767 "traddr": "10.0.0.3", 00:09:54.767 "trsvcid": "4420" 00:09:54.767 }, 00:09:54.767 "peer_address": { 00:09:54.767 "trtype": "TCP", 00:09:54.767 "adrfam": 
"IPv4", 00:09:54.767 "traddr": "10.0.0.1", 00:09:54.767 "trsvcid": "36588" 00:09:54.767 }, 00:09:54.767 "auth": { 00:09:54.767 "state": "completed", 00:09:54.767 "digest": "sha256", 00:09:54.767 "dhgroup": "ffdhe4096" 00:09:54.767 } 00:09:54.767 } 00:09:54.767 ]' 00:09:54.767 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:54.767 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:54.767 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:55.025 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:55.025 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:55.025 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:55.025 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:55.025 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:55.283 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:09:55.283 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:09:56.218 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:56.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:56.218 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:56.218 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.218 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.218 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.218 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:56.218 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:56.218 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:56.477 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:09:56.477 13:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:56.477 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:56.477 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:56.477 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:56.477 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:56.477 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:09:56.477 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.477 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.477 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.477 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:56.477 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:56.477 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:56.736 00:09:56.995 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:56.995 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:56.995 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.253 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:57.253 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:57.253 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.253 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.253 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.253 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:57.253 { 00:09:57.253 "cntlid": 31, 00:09:57.253 "qid": 0, 00:09:57.253 "state": "enabled", 00:09:57.253 "thread": "nvmf_tgt_poll_group_000", 00:09:57.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:57.253 "listen_address": { 00:09:57.253 "trtype": "TCP", 00:09:57.253 "adrfam": "IPv4", 00:09:57.253 "traddr": "10.0.0.3", 00:09:57.253 "trsvcid": "4420" 00:09:57.253 }, 00:09:57.253 "peer_address": { 00:09:57.253 "trtype": "TCP", 
00:09:57.253 "adrfam": "IPv4", 00:09:57.253 "traddr": "10.0.0.1", 00:09:57.253 "trsvcid": "38368" 00:09:57.253 }, 00:09:57.253 "auth": { 00:09:57.253 "state": "completed", 00:09:57.253 "digest": "sha256", 00:09:57.253 "dhgroup": "ffdhe4096" 00:09:57.253 } 00:09:57.253 } 00:09:57.253 ]' 00:09:57.253 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:57.253 13:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:57.253 13:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:57.253 13:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:57.253 13:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:57.512 13:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:57.512 13:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:57.512 13:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:57.771 13:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:09:57.771 13:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:09:58.337 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.596 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:09:58.596 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.596 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.596 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.596 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:58.596 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:58.596 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:58.596 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:58.856 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:09:58.856 
13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:58.856 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:58.856 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:58.856 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:58.856 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:58.856 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:58.856 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.856 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.856 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.856 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:58.856 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:58.856 13:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.444 00:09:59.444 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.444 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:59.444 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.703 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:59.703 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:59.703 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.703 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.703 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.703 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:59.703 { 00:09:59.703 "cntlid": 33, 00:09:59.703 "qid": 0, 00:09:59.703 "state": "enabled", 00:09:59.703 "thread": "nvmf_tgt_poll_group_000", 00:09:59.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:09:59.703 "listen_address": { 00:09:59.703 "trtype": "TCP", 00:09:59.703 "adrfam": "IPv4", 00:09:59.703 "traddr": 
"10.0.0.3", 00:09:59.703 "trsvcid": "4420" 00:09:59.703 }, 00:09:59.703 "peer_address": { 00:09:59.703 "trtype": "TCP", 00:09:59.703 "adrfam": "IPv4", 00:09:59.703 "traddr": "10.0.0.1", 00:09:59.703 "trsvcid": "38388" 00:09:59.703 }, 00:09:59.703 "auth": { 00:09:59.703 "state": "completed", 00:09:59.703 "digest": "sha256", 00:09:59.703 "dhgroup": "ffdhe6144" 00:09:59.703 } 00:09:59.703 } 00:09:59.703 ]' 00:09:59.703 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:59.703 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:59.703 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:59.703 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:59.703 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:59.962 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:59.962 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:59.962 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.221 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:10:00.221 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:10:00.789 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:00.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:00.789 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:00.789 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.789 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.789 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.789 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:00.789 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:00.789 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:01.370 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:01.370 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:01.370 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:01.370 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:01.370 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:01.370 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.370 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.370 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.370 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.370 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.370 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.370 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.370 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.640 00:10:01.640 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:01.640 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:01.640 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:02.206 { 00:10:02.206 "cntlid": 35, 00:10:02.206 "qid": 0, 00:10:02.206 "state": "enabled", 00:10:02.206 "thread": "nvmf_tgt_poll_group_000", 
00:10:02.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:02.206 "listen_address": { 00:10:02.206 "trtype": "TCP", 00:10:02.206 "adrfam": "IPv4", 00:10:02.206 "traddr": "10.0.0.3", 00:10:02.206 "trsvcid": "4420" 00:10:02.206 }, 00:10:02.206 "peer_address": { 00:10:02.206 "trtype": "TCP", 00:10:02.206 "adrfam": "IPv4", 00:10:02.206 "traddr": "10.0.0.1", 00:10:02.206 "trsvcid": "38394" 00:10:02.206 }, 00:10:02.206 "auth": { 00:10:02.206 "state": "completed", 00:10:02.206 "digest": "sha256", 00:10:02.206 "dhgroup": "ffdhe6144" 00:10:02.206 } 00:10:02.206 } 00:10:02.206 ]' 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:02.206 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.773 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:10:02.773 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:10:03.339 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:03.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:03.339 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:03.339 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.339 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.339 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.339 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:03.339 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:03.339 13:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:03.905 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:03.905 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:03.905 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:03.905 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:03.905 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:03.905 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.905 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.905 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.905 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.905 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.905 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.906 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.906 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.163 00:10:04.422 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:04.422 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:04.422 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:04.681 { 
00:10:04.681 "cntlid": 37, 00:10:04.681 "qid": 0, 00:10:04.681 "state": "enabled", 00:10:04.681 "thread": "nvmf_tgt_poll_group_000", 00:10:04.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:04.681 "listen_address": { 00:10:04.681 "trtype": "TCP", 00:10:04.681 "adrfam": "IPv4", 00:10:04.681 "traddr": "10.0.0.3", 00:10:04.681 "trsvcid": "4420" 00:10:04.681 }, 00:10:04.681 "peer_address": { 00:10:04.681 "trtype": "TCP", 00:10:04.681 "adrfam": "IPv4", 00:10:04.681 "traddr": "10.0.0.1", 00:10:04.681 "trsvcid": "38426" 00:10:04.681 }, 00:10:04.681 "auth": { 00:10:04.681 "state": "completed", 00:10:04.681 "digest": "sha256", 00:10:04.681 "dhgroup": "ffdhe6144" 00:10:04.681 } 00:10:04.681 } 00:10:04.681 ]' 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:04.681 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:05.248 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:10:05.248 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:10:05.813 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.813 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:05.813 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.813 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.813 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.813 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:05.813 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:05.813 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:06.387 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:06.387 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:06.387 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:06.387 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:06.387 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:06.387 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:06.387 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:10:06.387 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.387 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.387 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.387 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:06.387 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:06.387 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:06.663 00:10:06.663 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:06.663 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:06.663 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:07.229 { 00:10:07.229 "cntlid": 39, 00:10:07.229 "qid": 0, 00:10:07.229 "state": "enabled", 00:10:07.229 "thread": "nvmf_tgt_poll_group_000", 00:10:07.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:07.229 "listen_address": { 00:10:07.229 "trtype": "TCP", 00:10:07.229 "adrfam": "IPv4", 00:10:07.229 "traddr": "10.0.0.3", 00:10:07.229 "trsvcid": "4420" 00:10:07.229 }, 00:10:07.229 "peer_address": { 00:10:07.229 "trtype": "TCP", 00:10:07.229 "adrfam": "IPv4", 00:10:07.229 "traddr": "10.0.0.1", 00:10:07.229 "trsvcid": "55646" 00:10:07.229 }, 00:10:07.229 "auth": { 00:10:07.229 "state": "completed", 00:10:07.229 "digest": "sha256", 00:10:07.229 "dhgroup": "ffdhe6144" 00:10:07.229 } 00:10:07.229 } 00:10:07.229 ]' 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:07.229 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:07.487 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:10:07.487 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:10:08.420 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:08.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:08.420 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:08.420 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.420 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.420 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.420 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:08.420 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:08.420 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:08.420 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:08.679 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:08.679 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:08.679 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:08.679 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:08.679 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:08.679 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:08.679 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.679 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.679 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.679 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.679 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.679 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.679 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.613 00:10:09.613 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:09.613 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:09.613 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:09.613 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:09.613 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:09.613 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.613 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.613 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:09.613 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:09.613 { 00:10:09.613 "cntlid": 41, 00:10:09.613 "qid": 0, 00:10:09.613 "state": "enabled", 00:10:09.613 "thread": "nvmf_tgt_poll_group_000", 00:10:09.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:09.613 "listen_address": { 00:10:09.613 "trtype": "TCP", 00:10:09.613 "adrfam": "IPv4", 00:10:09.613 "traddr": "10.0.0.3", 00:10:09.613 "trsvcid": "4420" 00:10:09.613 }, 00:10:09.613 "peer_address": { 00:10:09.613 "trtype": "TCP", 00:10:09.613 "adrfam": "IPv4", 00:10:09.613 "traddr": "10.0.0.1", 00:10:09.613 "trsvcid": "55662" 00:10:09.613 }, 00:10:09.613 "auth": { 00:10:09.613 "state": "completed", 00:10:09.613 "digest": "sha256", 00:10:09.613 "dhgroup": "ffdhe8192" 00:10:09.613 } 00:10:09.613 } 00:10:09.613 ]' 00:10:09.613 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:09.872 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:09.872 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:09.872 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:09.872 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:09.872 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:09.872 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:09.872 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.130 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:10:10.130 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:10:11.066 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.066 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:11.066 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.066 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.066 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:11.066 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.066 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:11.066 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:11.634 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:11.634 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:11.634 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:11.634 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:11.634 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:11.634 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.634 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.634 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.634 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.634 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.634 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.634 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.634 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:12.201 00:10:12.201 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:12.201 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.201 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:12.768 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.768 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.768 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.768 13:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.768 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.768 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:12.768 { 00:10:12.768 "cntlid": 43, 00:10:12.768 "qid": 0, 00:10:12.768 "state": "enabled", 00:10:12.768 "thread": "nvmf_tgt_poll_group_000", 00:10:12.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:12.768 "listen_address": { 00:10:12.768 "trtype": "TCP", 00:10:12.768 "adrfam": "IPv4", 00:10:12.768 "traddr": "10.0.0.3", 00:10:12.768 "trsvcid": "4420" 00:10:12.768 }, 00:10:12.768 "peer_address": { 00:10:12.768 "trtype": "TCP", 00:10:12.768 "adrfam": "IPv4", 00:10:12.768 "traddr": "10.0.0.1", 00:10:12.768 "trsvcid": "55686" 00:10:12.768 }, 00:10:12.768 "auth": { 00:10:12.768 "state": "completed", 00:10:12.768 "digest": "sha256", 00:10:12.768 "dhgroup": "ffdhe8192" 00:10:12.768 } 00:10:12.768 } 00:10:12.768 ]' 00:10:12.768 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:12.768 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.768 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.768 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:12.768 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:12.768 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.768 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.768 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.334 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:10:13.334 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:10:14.267 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.267 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:14.267 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.267 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:10:14.267 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.267 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:14.267 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:14.267 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:14.525 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:14.525 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:14.525 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:14.525 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:14.525 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:14.525 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.525 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:14.525 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.525 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.525 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.525 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:14.525 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:14.525 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.458 00:10:15.458 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:15.458 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.458 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.716 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.716 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.716 13:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.716 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.716 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.716 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.716 { 00:10:15.716 "cntlid": 45, 00:10:15.716 "qid": 0, 00:10:15.716 "state": "enabled", 00:10:15.716 "thread": "nvmf_tgt_poll_group_000", 00:10:15.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:15.716 "listen_address": { 00:10:15.716 "trtype": "TCP", 00:10:15.716 "adrfam": "IPv4", 00:10:15.716 "traddr": "10.0.0.3", 00:10:15.716 "trsvcid": "4420" 00:10:15.716 }, 00:10:15.716 "peer_address": { 00:10:15.716 "trtype": "TCP", 00:10:15.716 "adrfam": "IPv4", 00:10:15.716 "traddr": "10.0.0.1", 00:10:15.716 "trsvcid": "55710" 00:10:15.716 }, 00:10:15.716 "auth": { 00:10:15.716 "state": "completed", 00:10:15.716 "digest": "sha256", 00:10:15.716 "dhgroup": "ffdhe8192" 00:10:15.716 } 00:10:15.716 } 00:10:15.716 ]' 00:10:15.716 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:15.716 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:15.716 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:15.716 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:15.716 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:15.716 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.716 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.716 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.285 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:10:16.285 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:10:16.851 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.851 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:16.851 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:16.851 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.851 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.851 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.851 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:16.851 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:17.109 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:17.109 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:17.110 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:17.110 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:17.110 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:17.110 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.110 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:10:17.110 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.110 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.110 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.110 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:17.110 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:17.110 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:18.041 00:10:18.041 13:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:18.041 13:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:18.041 13:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.300 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.300 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.300 
13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.300 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.300 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.300 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:18.300 { 00:10:18.300 "cntlid": 47, 00:10:18.300 "qid": 0, 00:10:18.300 "state": "enabled", 00:10:18.300 "thread": "nvmf_tgt_poll_group_000", 00:10:18.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:18.300 "listen_address": { 00:10:18.300 "trtype": "TCP", 00:10:18.300 "adrfam": "IPv4", 00:10:18.300 "traddr": "10.0.0.3", 00:10:18.300 "trsvcid": "4420" 00:10:18.300 }, 00:10:18.300 "peer_address": { 00:10:18.300 "trtype": "TCP", 00:10:18.300 "adrfam": "IPv4", 00:10:18.300 "traddr": "10.0.0.1", 00:10:18.300 "trsvcid": "55144" 00:10:18.300 }, 00:10:18.300 "auth": { 00:10:18.300 "state": "completed", 00:10:18.300 "digest": "sha256", 00:10:18.300 "dhgroup": "ffdhe8192" 00:10:18.300 } 00:10:18.300 } 00:10:18.300 ]' 00:10:18.300 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:18.300 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.300 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.300 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:18.300 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.559 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.559 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.559 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.817 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:10:18.817 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:10:19.384 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.384 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:19.384 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.384 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:10:19.384 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.384 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:19.384 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:19.384 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:19.384 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:19.384 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:19.642 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:19.642 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.642 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:19.642 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:19.642 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:19.643 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.643 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.643 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.643 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.643 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.643 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.643 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.643 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.233 00:10:20.233 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:20.233 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:20.233 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.491 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.491 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.491 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.491 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.491 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.491 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.491 { 00:10:20.491 "cntlid": 49, 00:10:20.491 "qid": 0, 00:10:20.491 "state": "enabled", 00:10:20.491 "thread": "nvmf_tgt_poll_group_000", 00:10:20.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:20.491 "listen_address": { 00:10:20.491 "trtype": "TCP", 00:10:20.491 "adrfam": "IPv4", 00:10:20.491 "traddr": "10.0.0.3", 00:10:20.491 "trsvcid": "4420" 00:10:20.491 }, 00:10:20.491 "peer_address": { 00:10:20.491 "trtype": "TCP", 00:10:20.491 "adrfam": "IPv4", 00:10:20.491 "traddr": "10.0.0.1", 00:10:20.491 "trsvcid": "55174" 00:10:20.491 }, 00:10:20.491 "auth": { 00:10:20.491 "state": "completed", 00:10:20.491 "digest": "sha384", 00:10:20.491 "dhgroup": "null" 00:10:20.491 } 00:10:20.491 } 00:10:20.491 ]' 00:10:20.491 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.749 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:20.749 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.749 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:20.749 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.749 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.749 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.749 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.007 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:10:21.007 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:10:21.940 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.940 13:38:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:21.940 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.940 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.940 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.940 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.940 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:21.940 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:22.198 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:22.198 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:22.198 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:22.198 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:22.198 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:22.198 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.198 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.198 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.198 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.198 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.198 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.198 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.198 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.456 00:10:22.456 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.456 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:10:22.456 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:23.023 { 00:10:23.023 "cntlid": 51, 00:10:23.023 "qid": 0, 00:10:23.023 "state": "enabled", 00:10:23.023 "thread": "nvmf_tgt_poll_group_000", 00:10:23.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:23.023 "listen_address": { 00:10:23.023 "trtype": "TCP", 00:10:23.023 "adrfam": "IPv4", 00:10:23.023 "traddr": "10.0.0.3", 00:10:23.023 "trsvcid": "4420" 00:10:23.023 }, 00:10:23.023 "peer_address": { 00:10:23.023 "trtype": "TCP", 00:10:23.023 "adrfam": "IPv4", 00:10:23.023 "traddr": "10.0.0.1", 00:10:23.023 "trsvcid": "55208" 00:10:23.023 }, 00:10:23.023 "auth": { 00:10:23.023 "state": "completed", 00:10:23.023 "digest": "sha384", 00:10:23.023 "dhgroup": "null" 00:10:23.023 } 00:10:23.023 } 00:10:23.023 ]' 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.023 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.282 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:10:23.282 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:10:24.216 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.216 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.216 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:24.216 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.217 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.217 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.217 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.217 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:24.217 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:24.474 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:24.474 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:24.474 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:24.474 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:24.474 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:24.474 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.474 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.474 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.474 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.474 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.474 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.474 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.474 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.041 00:10:25.041 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:25.041 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.041 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:25.299 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.299 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.299 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.299 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.299 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.299 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:25.299 { 00:10:25.299 "cntlid": 53, 00:10:25.299 "qid": 0, 00:10:25.299 "state": "enabled", 00:10:25.299 "thread": "nvmf_tgt_poll_group_000", 00:10:25.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:25.299 "listen_address": { 00:10:25.299 "trtype": "TCP", 00:10:25.299 "adrfam": "IPv4", 00:10:25.299 "traddr": "10.0.0.3", 00:10:25.299 "trsvcid": "4420" 00:10:25.299 }, 00:10:25.299 "peer_address": { 00:10:25.299 "trtype": "TCP", 00:10:25.299 "adrfam": "IPv4", 00:10:25.299 "traddr": "10.0.0.1", 00:10:25.299 "trsvcid": "55232" 00:10:25.299 }, 00:10:25.299 "auth": { 00:10:25.299 "state": "completed", 00:10:25.299 "digest": "sha384", 00:10:25.299 "dhgroup": "null" 00:10:25.299 } 00:10:25.299 } 00:10:25.299 ]' 00:10:25.300 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.300 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:25.300 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.300 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:25.300 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.300 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.300 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.300 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.865 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:10:25.865 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:10:26.431 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.431 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:26.431 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.431 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.431 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.431 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.431 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:26.431 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:26.999 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:26.999 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.999 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:26.999 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:26.999 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:26.999 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.999 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:10:26.999 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.999 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.999 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.999 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:26.999 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:26.999 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:27.565 00:10:27.565 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:27.565 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.565 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:28.132 { 00:10:28.132 "cntlid": 55, 00:10:28.132 "qid": 0, 00:10:28.132 "state": "enabled", 00:10:28.132 "thread": "nvmf_tgt_poll_group_000", 00:10:28.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:28.132 "listen_address": { 00:10:28.132 "trtype": "TCP", 00:10:28.132 "adrfam": "IPv4", 00:10:28.132 "traddr": "10.0.0.3", 00:10:28.132 "trsvcid": "4420" 00:10:28.132 }, 00:10:28.132 "peer_address": { 00:10:28.132 "trtype": "TCP", 00:10:28.132 "adrfam": "IPv4", 00:10:28.132 "traddr": "10.0.0.1", 00:10:28.132 "trsvcid": "43198" 00:10:28.132 }, 00:10:28.132 "auth": { 00:10:28.132 "state": "completed", 00:10:28.132 "digest": "sha384", 00:10:28.132 "dhgroup": "null" 00:10:28.132 } 00:10:28.132 } 00:10:28.132 ]' 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.132 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.700 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:10:28.700 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:10:31.287 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:10:31.287 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:31.287 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.287 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.287 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.287 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:31.287 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:31.287 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:31.287 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:31.546 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:31.546 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:31.546 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:31.546 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:31.546 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:31.546 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.547 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.547 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.547 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.547 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.547 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.547 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.547 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.481 00:10:32.481 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
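The same cycle repeats throughout this trace for every digest/dhgroup/key combination. As a readability aid, the sketch below reconstructs a single pass of connect_authenticate using only commands that appear in the traced lines; it is not part of the log itself. The $hostnqn, $hostid, $key and $ckey placeholders are hypothetical variables standing in for the uuid-based host NQN and the literal DHHC-1 secrets shown above, and rpc_cmd is assumed to be the test framework helper that reaches the nvmf target's RPC socket, while hostrpc maps to scripts/rpc.py -s /var/tmp/host.sock (the host-side bdev_nvme app), as the trace shows.

# Sketch of one connect_authenticate cycle (sha384 / ffdhe2048 / key0), under the assumptions above.
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

# 1. Select the digest/dhgroup pair under test on the SPDK host side.
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
# 2. Allow the host on the subsystem with this iteration's key pair.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. Attach through the SPDK host stack and verify the qpair completed DH-HMAC-CHAP.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                  # expect: nvme0
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'    # expect: completed / sha384 / ffdhe2048
hostrpc bdev_nvme_detach_controller nvme0
# 4. Repeat the connection through the kernel host (nvme-cli) with the raw DHHC-1 secrets.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# 5. Remove the host so the next key/dhgroup combination starts from a clean subsystem.
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"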
00:10:32.481 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.481 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:33.046 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.046 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.046 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.046 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.046 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.046 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:33.046 { 00:10:33.046 "cntlid": 57, 00:10:33.046 "qid": 0, 00:10:33.046 "state": "enabled", 00:10:33.046 "thread": "nvmf_tgt_poll_group_000", 00:10:33.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:33.046 "listen_address": { 00:10:33.046 "trtype": "TCP", 00:10:33.046 "adrfam": "IPv4", 00:10:33.046 "traddr": "10.0.0.3", 00:10:33.046 "trsvcid": "4420" 00:10:33.046 }, 00:10:33.046 "peer_address": { 00:10:33.046 "trtype": "TCP", 00:10:33.046 "adrfam": "IPv4", 00:10:33.046 "traddr": "10.0.0.1", 00:10:33.046 "trsvcid": "43232" 00:10:33.046 }, 00:10:33.046 "auth": { 00:10:33.046 "state": "completed", 00:10:33.046 "digest": "sha384", 00:10:33.046 "dhgroup": "ffdhe2048" 00:10:33.046 } 00:10:33.046 } 00:10:33.046 ]' 00:10:33.046 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:33.303 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:33.303 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:33.303 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:33.303 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:33.560 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.560 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.560 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.818 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:10:33.818 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: 
--dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:10:35.815 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.815 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:35.815 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.815 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.815 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.815 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.815 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:35.815 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:36.381 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:36.381 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:36.381 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:36.381 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:36.381 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:36.381 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.381 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:36.381 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.381 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.381 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.381 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:36.382 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:36.382 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.316 00:10:37.316 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.316 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.316 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.880 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.880 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.880 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.880 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.880 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.880 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.880 { 00:10:37.880 "cntlid": 59, 00:10:37.880 "qid": 0, 00:10:37.880 "state": "enabled", 00:10:37.880 "thread": "nvmf_tgt_poll_group_000", 00:10:37.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:37.880 "listen_address": { 00:10:37.880 "trtype": "TCP", 00:10:37.880 "adrfam": "IPv4", 00:10:37.880 "traddr": "10.0.0.3", 00:10:37.880 "trsvcid": "4420" 00:10:37.880 }, 00:10:37.880 "peer_address": { 00:10:37.880 "trtype": "TCP", 00:10:37.880 "adrfam": "IPv4", 00:10:37.880 "traddr": "10.0.0.1", 00:10:37.880 "trsvcid": "55354" 00:10:37.880 }, 00:10:37.880 "auth": { 00:10:37.880 "state": "completed", 00:10:37.880 "digest": "sha384", 00:10:37.880 "dhgroup": "ffdhe2048" 00:10:37.880 } 00:10:37.880 } 00:10:37.880 ]' 00:10:37.880 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:38.137 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:38.137 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:38.137 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:38.137 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.138 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.138 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.138 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.702 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:10:38.702 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:10:40.603 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.603 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:40.603 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.603 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.603 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.603 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.603 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:40.603 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:40.861 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:40.861 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.861 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:40.861 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:40.861 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:40.861 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.861 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.861 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.861 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.861 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.861 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.861 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.861 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.795 00:10:41.795 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:41.795 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.795 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.360 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.360 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.360 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.360 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.360 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.360 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:42.360 { 00:10:42.360 "cntlid": 61, 00:10:42.360 "qid": 0, 00:10:42.360 "state": "enabled", 00:10:42.360 "thread": "nvmf_tgt_poll_group_000", 00:10:42.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:42.360 "listen_address": { 00:10:42.360 "trtype": "TCP", 00:10:42.360 "adrfam": "IPv4", 00:10:42.360 "traddr": "10.0.0.3", 00:10:42.360 "trsvcid": "4420" 00:10:42.360 }, 00:10:42.360 "peer_address": { 00:10:42.360 "trtype": "TCP", 00:10:42.360 "adrfam": "IPv4", 00:10:42.360 "traddr": "10.0.0.1", 00:10:42.360 "trsvcid": "55370" 00:10:42.360 }, 00:10:42.360 "auth": { 00:10:42.360 "state": "completed", 00:10:42.360 "digest": "sha384", 00:10:42.360 "dhgroup": "ffdhe2048" 00:10:42.360 } 00:10:42.360 } 00:10:42.360 ]' 00:10:42.360 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:42.360 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:42.360 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:42.360 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:42.360 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:42.360 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.360 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.360 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.926 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:10:42.926 13:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:10:44.828 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.829 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:44.829 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.829 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.829 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.829 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.829 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:44.829 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:45.396 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:45.396 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:45.396 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:45.396 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:45.396 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:45.396 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.396 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:10:45.396 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.396 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.396 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.396 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:45.396 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:45.396 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:45.963 00:10:45.963 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:45.963 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.963 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.530 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.530 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.530 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.530 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:46.530 { 00:10:46.530 "cntlid": 63, 00:10:46.530 "qid": 0, 00:10:46.530 "state": "enabled", 00:10:46.530 "thread": "nvmf_tgt_poll_group_000", 00:10:46.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:46.530 "listen_address": { 00:10:46.530 "trtype": "TCP", 00:10:46.530 "adrfam": "IPv4", 00:10:46.530 "traddr": "10.0.0.3", 00:10:46.530 "trsvcid": "4420" 00:10:46.530 }, 00:10:46.530 "peer_address": { 00:10:46.530 "trtype": "TCP", 00:10:46.530 "adrfam": "IPv4", 00:10:46.530 "traddr": "10.0.0.1", 00:10:46.530 "trsvcid": "55380" 00:10:46.530 }, 00:10:46.530 "auth": { 00:10:46.530 "state": "completed", 00:10:46.530 "digest": "sha384", 00:10:46.530 "dhgroup": "ffdhe2048" 00:10:46.530 } 00:10:46.530 } 00:10:46.530 ]' 00:10:46.530 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:46.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:46.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.352 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:10:47.352 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:10:49.266 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.266 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:49.266 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.266 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.266 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.266 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:49.266 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:49.266 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:49.266 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:49.524 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:10:49.524 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:49.524 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:49.524 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:49.524 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:49.524 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.524 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.524 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.524 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.802 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.802 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.802 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:10:49.802 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:51.206 00:10:51.206 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:51.206 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.206 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.465 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.465 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.465 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.465 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.465 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.465 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.465 { 00:10:51.465 "cntlid": 65, 00:10:51.465 "qid": 0, 00:10:51.465 "state": "enabled", 00:10:51.465 "thread": "nvmf_tgt_poll_group_000", 00:10:51.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:51.465 "listen_address": { 00:10:51.465 "trtype": "TCP", 00:10:51.465 "adrfam": "IPv4", 00:10:51.465 "traddr": "10.0.0.3", 00:10:51.465 "trsvcid": "4420" 00:10:51.465 }, 00:10:51.465 "peer_address": { 00:10:51.465 "trtype": "TCP", 00:10:51.465 "adrfam": "IPv4", 00:10:51.465 "traddr": "10.0.0.1", 00:10:51.465 "trsvcid": "43568" 00:10:51.465 }, 00:10:51.465 "auth": { 00:10:51.465 "state": "completed", 00:10:51.465 "digest": "sha384", 00:10:51.465 "dhgroup": "ffdhe3072" 00:10:51.465 } 00:10:51.465 } 00:10:51.465 ]' 00:10:51.465 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.723 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:51.723 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.723 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:51.723 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.981 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.981 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.981 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.555 13:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:10:52.555 13:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:10:55.119 13:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.119 13:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:10:55.119 13:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.119 13:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.119 13:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.119 13:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:55.119 13:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:55.119 13:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:55.687 13:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:10:55.687 13:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:55.687 13:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:55.687 13:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:55.687 13:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:55.687 13:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.687 13:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.687 13:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.687 13:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.687 13:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.687 13:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.687 13:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.687 13:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.063 00:10:57.063 13:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.063 13:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.063 13:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.653 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.653 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.653 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.653 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.653 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.653 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.653 { 00:10:57.653 "cntlid": 67, 00:10:57.653 "qid": 0, 00:10:57.653 "state": "enabled", 00:10:57.653 "thread": "nvmf_tgt_poll_group_000", 00:10:57.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:10:57.653 "listen_address": { 00:10:57.653 "trtype": "TCP", 00:10:57.653 "adrfam": "IPv4", 00:10:57.653 "traddr": "10.0.0.3", 00:10:57.653 "trsvcid": "4420" 00:10:57.653 }, 00:10:57.653 "peer_address": { 00:10:57.653 "trtype": "TCP", 00:10:57.653 "adrfam": "IPv4", 00:10:57.653 "traddr": "10.0.0.1", 00:10:57.653 "trsvcid": "51020" 00:10:57.653 }, 00:10:57.653 "auth": { 00:10:57.653 "state": "completed", 00:10:57.653 "digest": "sha384", 00:10:57.653 "dhgroup": "ffdhe3072" 00:10:57.653 } 00:10:57.653 } 00:10:57.653 ]' 00:10:57.653 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:57.653 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:57.653 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:57.915 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:57.915 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.915 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.915 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.915 13:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.482 13:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:10:58.482 13:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:11:00.414 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.414 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:00.414 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.414 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.414 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.414 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.414 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:00.414 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:00.984 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:00.984 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.984 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:00.984 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:00.984 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:00.984 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.984 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.984 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.984 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.984 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.984 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.984 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.984 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:02.358 00:11:02.358 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.358 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.358 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.617 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.617 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.617 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.617 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.875 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.875 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.875 { 00:11:02.875 "cntlid": 69, 00:11:02.875 "qid": 0, 00:11:02.875 "state": "enabled", 00:11:02.875 "thread": "nvmf_tgt_poll_group_000", 00:11:02.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:02.875 "listen_address": { 00:11:02.875 "trtype": "TCP", 00:11:02.875 "adrfam": "IPv4", 00:11:02.875 "traddr": "10.0.0.3", 00:11:02.875 "trsvcid": "4420" 00:11:02.875 }, 00:11:02.875 "peer_address": { 00:11:02.875 "trtype": "TCP", 00:11:02.875 "adrfam": "IPv4", 00:11:02.875 "traddr": "10.0.0.1", 00:11:02.875 "trsvcid": "51036" 00:11:02.875 }, 00:11:02.875 "auth": { 00:11:02.875 "state": "completed", 00:11:02.875 "digest": "sha384", 00:11:02.875 "dhgroup": "ffdhe3072" 00:11:02.875 } 00:11:02.875 } 00:11:02.875 ]' 00:11:02.875 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.875 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:02.875 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.875 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:02.875 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.133 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.133 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:03.133 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.699 13:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:11:03.699 13:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:11:05.627 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.627 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:05.627 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.627 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.627 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.627 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.627 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:05.627 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:06.192 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:06.192 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.192 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:06.192 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:06.192 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:06.192 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.192 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:11:06.192 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.192 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.192 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.192 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:06.192 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.192 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:07.126 00:11:07.126 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.126 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.126 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.732 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.732 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.732 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.732 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.732 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.732 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.732 { 00:11:07.732 "cntlid": 71, 00:11:07.732 "qid": 0, 00:11:07.732 "state": "enabled", 00:11:07.732 "thread": "nvmf_tgt_poll_group_000", 00:11:07.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:07.732 "listen_address": { 00:11:07.732 "trtype": "TCP", 00:11:07.732 "adrfam": "IPv4", 00:11:07.732 "traddr": "10.0.0.3", 00:11:07.732 "trsvcid": "4420" 00:11:07.732 }, 00:11:07.732 "peer_address": { 00:11:07.732 "trtype": "TCP", 00:11:07.732 "adrfam": "IPv4", 00:11:07.732 "traddr": "10.0.0.1", 00:11:07.732 "trsvcid": "50404" 00:11:07.732 }, 00:11:07.732 "auth": { 00:11:07.732 "state": "completed", 00:11:07.732 "digest": "sha384", 00:11:07.732 "dhgroup": "ffdhe3072" 00:11:07.732 } 00:11:07.732 } 00:11:07.732 ]' 00:11:07.732 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.990 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.990 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.990 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:07.990 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.248 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.248 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.248 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.814 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:11:08.814 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:11:10.720 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.720 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:10.720 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.720 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.720 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.720 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:10.978 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.978 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:10.978 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:11.236 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:11.236 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.236 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:11.493 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:11.493 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:11.493 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.494 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.494 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.494 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.494 13:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.494 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.494 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.494 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.427 00:11:12.427 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.427 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.427 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.992 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.992 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.992 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.992 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.992 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.992 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.992 { 00:11:12.992 "cntlid": 73, 00:11:12.992 "qid": 0, 00:11:12.992 "state": "enabled", 00:11:12.992 "thread": "nvmf_tgt_poll_group_000", 00:11:12.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:12.992 "listen_address": { 00:11:12.992 "trtype": "TCP", 00:11:12.992 "adrfam": "IPv4", 00:11:12.992 "traddr": "10.0.0.3", 00:11:12.992 "trsvcid": "4420" 00:11:12.992 }, 00:11:12.992 "peer_address": { 00:11:12.992 "trtype": "TCP", 00:11:12.992 "adrfam": "IPv4", 00:11:12.992 "traddr": "10.0.0.1", 00:11:12.992 "trsvcid": "50430" 00:11:12.992 }, 00:11:12.992 "auth": { 00:11:12.992 "state": "completed", 00:11:12.992 "digest": "sha384", 00:11:12.992 "dhgroup": "ffdhe4096" 00:11:12.992 } 00:11:12.992 } 00:11:12.992 ]' 00:11:12.992 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.251 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.251 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.251 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:13.251 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.251 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.251 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.251 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.822 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:11:13.822 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:11:15.741 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.741 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:15.741 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.741 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.741 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.741 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.741 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:15.741 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:16.311 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:16.311 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.311 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:16.311 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:16.311 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:16.311 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.311 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.311 13:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.311 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.311 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.311 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.311 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.311 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.277 00:11:17.277 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.277 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.277 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.845 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.845 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.845 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.845 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.845 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.845 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.845 { 00:11:17.845 "cntlid": 75, 00:11:17.845 "qid": 0, 00:11:17.845 "state": "enabled", 00:11:17.845 "thread": "nvmf_tgt_poll_group_000", 00:11:17.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:17.845 "listen_address": { 00:11:17.845 "trtype": "TCP", 00:11:17.845 "adrfam": "IPv4", 00:11:17.845 "traddr": "10.0.0.3", 00:11:17.845 "trsvcid": "4420" 00:11:17.845 }, 00:11:17.845 "peer_address": { 00:11:17.845 "trtype": "TCP", 00:11:17.845 "adrfam": "IPv4", 00:11:17.845 "traddr": "10.0.0.1", 00:11:17.845 "trsvcid": "34832" 00:11:17.845 }, 00:11:17.845 "auth": { 00:11:17.845 "state": "completed", 00:11:17.845 "digest": "sha384", 00:11:17.845 "dhgroup": "ffdhe4096" 00:11:17.845 } 00:11:17.845 } 00:11:17.845 ]' 00:11:17.845 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.845 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.845 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.103 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:18.103 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.103 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.103 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.103 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.670 13:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:11:18.670 13:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:11:20.606 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.606 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:20.606 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.606 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.606 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.606 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.606 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:20.606 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:21.211 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:21.211 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.211 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:21.211 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:21.211 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:21.211 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.211 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.211 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.211 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.211 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.211 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.211 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.211 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.146 00:11:22.146 13:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.146 13:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.146 13:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.713 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.713 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.713 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.713 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.713 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.713 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.713 { 00:11:22.713 "cntlid": 77, 00:11:22.713 "qid": 0, 00:11:22.713 "state": "enabled", 00:11:22.713 "thread": "nvmf_tgt_poll_group_000", 00:11:22.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:22.713 "listen_address": { 00:11:22.713 "trtype": "TCP", 00:11:22.713 "adrfam": "IPv4", 00:11:22.713 "traddr": "10.0.0.3", 00:11:22.713 "trsvcid": "4420" 00:11:22.713 }, 00:11:22.713 "peer_address": { 00:11:22.713 "trtype": "TCP", 00:11:22.713 "adrfam": "IPv4", 00:11:22.713 "traddr": "10.0.0.1", 00:11:22.713 "trsvcid": "34848" 00:11:22.713 }, 00:11:22.713 "auth": { 00:11:22.713 "state": "completed", 00:11:22.713 "digest": "sha384", 00:11:22.713 "dhgroup": "ffdhe4096" 00:11:22.713 } 00:11:22.713 } 00:11:22.713 ]' 00:11:22.713 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.972 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.972 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:22.972 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:22.972 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.972 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.972 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.972 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.538 13:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:11:23.538 13:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:11:25.439 13:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.439 13:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:25.439 13:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.439 13:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.439 13:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.439 13:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.440 13:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:25.440 13:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:25.698 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:25.698 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.698 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:25.698 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:25.698 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:25.698 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.698 13:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:11:25.698 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.698 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.698 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.698 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:25.698 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.698 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:26.263 00:11:26.263 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.263 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.263 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.830 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.830 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.830 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.830 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.830 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.830 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.830 { 00:11:26.830 "cntlid": 79, 00:11:26.830 "qid": 0, 00:11:26.830 "state": "enabled", 00:11:26.830 "thread": "nvmf_tgt_poll_group_000", 00:11:26.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:26.830 "listen_address": { 00:11:26.830 "trtype": "TCP", 00:11:26.830 "adrfam": "IPv4", 00:11:26.830 "traddr": "10.0.0.3", 00:11:26.830 "trsvcid": "4420" 00:11:26.830 }, 00:11:26.830 "peer_address": { 00:11:26.830 "trtype": "TCP", 00:11:26.830 "adrfam": "IPv4", 00:11:26.830 "traddr": "10.0.0.1", 00:11:26.830 "trsvcid": "33418" 00:11:26.830 }, 00:11:26.830 "auth": { 00:11:26.830 "state": "completed", 00:11:26.830 "digest": "sha384", 00:11:26.830 "dhgroup": "ffdhe4096" 00:11:26.830 } 00:11:26.830 } 00:11:26.830 ]' 00:11:26.830 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.830 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.830 13:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.089 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:27.089 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.089 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.089 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.089 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.348 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:11:27.349 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:11:28.285 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.285 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:28.285 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.285 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.285 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.285 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:28.285 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.285 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:28.285 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:28.852 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:28.852 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.852 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:28.852 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:28.852 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:28.852 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.852 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.852 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.852 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.852 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.852 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.852 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.852 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.419 00:11:29.419 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.419 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.420 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.678 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.678 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.678 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.678 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.678 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.678 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.678 { 00:11:29.678 "cntlid": 81, 00:11:29.678 "qid": 0, 00:11:29.678 "state": "enabled", 00:11:29.678 "thread": "nvmf_tgt_poll_group_000", 00:11:29.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:29.678 "listen_address": { 00:11:29.678 "trtype": "TCP", 00:11:29.678 "adrfam": "IPv4", 00:11:29.678 "traddr": "10.0.0.3", 00:11:29.678 "trsvcid": "4420" 00:11:29.678 }, 00:11:29.678 "peer_address": { 00:11:29.678 "trtype": "TCP", 00:11:29.678 "adrfam": "IPv4", 00:11:29.678 "traddr": "10.0.0.1", 00:11:29.678 "trsvcid": "33430" 00:11:29.678 }, 00:11:29.678 "auth": { 00:11:29.678 "state": "completed", 00:11:29.678 "digest": "sha384", 00:11:29.678 "dhgroup": "ffdhe6144" 00:11:29.678 } 00:11:29.678 } 00:11:29.678 ]' 00:11:29.678 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:11:29.936 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.936 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.936 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:29.936 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.936 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.936 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.936 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.193 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:11:30.194 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:11:31.130 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.130 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:31.130 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.130 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.130 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.130 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.130 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:31.130 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:31.696 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:31.696 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:31.696 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:31.696 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:31.696 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:31.696 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.696 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.696 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.696 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.696 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.696 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.696 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.696 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.266 00:11:32.266 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.266 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.266 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.526 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.526 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.526 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.526 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.526 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.526 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.526 { 00:11:32.526 "cntlid": 83, 00:11:32.526 "qid": 0, 00:11:32.526 "state": "enabled", 00:11:32.526 "thread": "nvmf_tgt_poll_group_000", 00:11:32.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:32.526 "listen_address": { 00:11:32.526 "trtype": "TCP", 00:11:32.526 "adrfam": "IPv4", 00:11:32.526 "traddr": "10.0.0.3", 00:11:32.526 "trsvcid": "4420" 00:11:32.526 }, 00:11:32.526 "peer_address": { 00:11:32.526 "trtype": "TCP", 00:11:32.526 "adrfam": "IPv4", 00:11:32.526 "traddr": "10.0.0.1", 00:11:32.526 "trsvcid": "33470" 00:11:32.526 }, 00:11:32.526 "auth": { 00:11:32.526 "state": "completed", 00:11:32.526 "digest": "sha384", 
00:11:32.526 "dhgroup": "ffdhe6144" 00:11:32.526 } 00:11:32.526 } 00:11:32.526 ]' 00:11:32.526 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.785 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.785 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.785 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:32.785 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.785 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.785 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.785 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.042 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:11:33.042 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:11:33.976 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.976 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:33.976 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.976 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.976 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.976 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.976 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:33.976 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:34.235 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:34.235 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.235 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:34.235 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:34.235 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:34.235 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.235 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.235 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.235 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.235 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.235 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.235 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.235 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.800 00:11:34.800 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.800 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.800 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.058 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.058 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.058 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.058 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.317 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.317 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.317 { 00:11:35.317 "cntlid": 85, 00:11:35.317 "qid": 0, 00:11:35.317 "state": "enabled", 00:11:35.317 "thread": "nvmf_tgt_poll_group_000", 00:11:35.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:35.317 "listen_address": { 00:11:35.317 "trtype": "TCP", 00:11:35.317 "adrfam": "IPv4", 00:11:35.317 "traddr": "10.0.0.3", 00:11:35.317 "trsvcid": "4420" 00:11:35.317 }, 00:11:35.317 "peer_address": { 00:11:35.317 "trtype": "TCP", 00:11:35.317 "adrfam": "IPv4", 00:11:35.317 "traddr": "10.0.0.1", 00:11:35.317 "trsvcid": "33502" 
00:11:35.317 }, 00:11:35.317 "auth": { 00:11:35.317 "state": "completed", 00:11:35.317 "digest": "sha384", 00:11:35.317 "dhgroup": "ffdhe6144" 00:11:35.317 } 00:11:35.317 } 00:11:35.317 ]' 00:11:35.317 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.317 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.317 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.317 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:35.317 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.317 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.317 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.317 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.884 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:11:35.884 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:11:36.819 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.819 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:36.819 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.819 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.819 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.819 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.819 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:36.819 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:37.078 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:37.078 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:37.078 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:37.078 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:37.078 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:37.078 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.078 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:11:37.078 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.078 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.078 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.078 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:37.078 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:37.078 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:37.641 00:11:37.641 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.641 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.641 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.899 { 00:11:37.899 "cntlid": 87, 00:11:37.899 "qid": 0, 00:11:37.899 "state": "enabled", 00:11:37.899 "thread": "nvmf_tgt_poll_group_000", 00:11:37.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:37.899 "listen_address": { 00:11:37.899 "trtype": "TCP", 00:11:37.899 "adrfam": "IPv4", 00:11:37.899 "traddr": "10.0.0.3", 00:11:37.899 "trsvcid": "4420" 00:11:37.899 }, 00:11:37.899 "peer_address": { 00:11:37.899 "trtype": "TCP", 00:11:37.899 "adrfam": "IPv4", 00:11:37.899 "traddr": "10.0.0.1", 00:11:37.899 "trsvcid": 
"59388" 00:11:37.899 }, 00:11:37.899 "auth": { 00:11:37.899 "state": "completed", 00:11:37.899 "digest": "sha384", 00:11:37.899 "dhgroup": "ffdhe6144" 00:11:37.899 } 00:11:37.899 } 00:11:37.899 ]' 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.899 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.465 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:11:38.465 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:11:39.031 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.031 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:39.031 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.031 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.031 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.031 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:39.031 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.031 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:39.031 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:39.289 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:39.289 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:39.289 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:39.289 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:39.289 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:39.289 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.289 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.289 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.289 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.289 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.289 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.289 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.289 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.225 00:11:40.225 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.225 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.225 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.484 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.484 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.484 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.484 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.484 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.484 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.484 { 00:11:40.484 "cntlid": 89, 00:11:40.484 "qid": 0, 00:11:40.484 "state": "enabled", 00:11:40.484 "thread": "nvmf_tgt_poll_group_000", 00:11:40.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:40.484 "listen_address": { 00:11:40.484 "trtype": "TCP", 00:11:40.484 "adrfam": "IPv4", 00:11:40.484 "traddr": "10.0.0.3", 00:11:40.484 "trsvcid": "4420" 00:11:40.484 }, 00:11:40.484 "peer_address": { 00:11:40.484 
"trtype": "TCP", 00:11:40.484 "adrfam": "IPv4", 00:11:40.484 "traddr": "10.0.0.1", 00:11:40.484 "trsvcid": "59412" 00:11:40.484 }, 00:11:40.484 "auth": { 00:11:40.484 "state": "completed", 00:11:40.484 "digest": "sha384", 00:11:40.484 "dhgroup": "ffdhe8192" 00:11:40.484 } 00:11:40.484 } 00:11:40.484 ]' 00:11:40.484 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.484 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.484 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.742 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:40.742 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.742 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.742 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.742 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.309 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:11:41.309 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:11:42.243 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.243 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:42.243 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.243 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.243 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.243 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.243 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:42.243 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:42.502 13:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:42.502 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.502 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:42.502 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:42.502 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:42.502 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.502 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.502 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.502 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.502 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.502 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.502 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.502 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.436 00:11:43.436 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.436 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.436 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.694 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.694 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.694 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.694 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.694 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.694 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.694 { 00:11:43.694 "cntlid": 91, 00:11:43.694 "qid": 0, 00:11:43.694 "state": "enabled", 00:11:43.694 "thread": "nvmf_tgt_poll_group_000", 00:11:43.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 
00:11:43.694 "listen_address": { 00:11:43.694 "trtype": "TCP", 00:11:43.694 "adrfam": "IPv4", 00:11:43.694 "traddr": "10.0.0.3", 00:11:43.694 "trsvcid": "4420" 00:11:43.694 }, 00:11:43.694 "peer_address": { 00:11:43.694 "trtype": "TCP", 00:11:43.694 "adrfam": "IPv4", 00:11:43.694 "traddr": "10.0.0.1", 00:11:43.694 "trsvcid": "59432" 00:11:43.694 }, 00:11:43.694 "auth": { 00:11:43.694 "state": "completed", 00:11:43.694 "digest": "sha384", 00:11:43.694 "dhgroup": "ffdhe8192" 00:11:43.694 } 00:11:43.694 } 00:11:43.694 ]' 00:11:43.694 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.953 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.953 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.953 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:43.953 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.953 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.953 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.953 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.519 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:11:44.519 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:11:45.085 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.085 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:45.085 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.085 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.085 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.085 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.085 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:45.085 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:45.653 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:45.653 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.653 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:45.653 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:45.653 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:45.653 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.653 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.653 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.653 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.653 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.653 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.653 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.653 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.220 00:11:46.220 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.220 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.220 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.478 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.478 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.478 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.478 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.478 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.478 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.478 { 00:11:46.478 "cntlid": 93, 00:11:46.478 "qid": 0, 00:11:46.478 "state": "enabled", 00:11:46.478 "thread": 
"nvmf_tgt_poll_group_000", 00:11:46.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:46.478 "listen_address": { 00:11:46.478 "trtype": "TCP", 00:11:46.478 "adrfam": "IPv4", 00:11:46.478 "traddr": "10.0.0.3", 00:11:46.478 "trsvcid": "4420" 00:11:46.478 }, 00:11:46.478 "peer_address": { 00:11:46.478 "trtype": "TCP", 00:11:46.478 "adrfam": "IPv4", 00:11:46.478 "traddr": "10.0.0.1", 00:11:46.478 "trsvcid": "59458" 00:11:46.478 }, 00:11:46.478 "auth": { 00:11:46.478 "state": "completed", 00:11:46.478 "digest": "sha384", 00:11:46.478 "dhgroup": "ffdhe8192" 00:11:46.478 } 00:11:46.478 } 00:11:46.478 ]' 00:11:46.478 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.736 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:46.736 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.736 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:46.736 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.736 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.736 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.736 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.994 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:11:46.994 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:11:47.928 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.928 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:47.928 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.928 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.928 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.928 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.928 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:47.928 13:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:48.186 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:48.186 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.186 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:48.186 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:48.186 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:48.186 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.186 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:11:48.186 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.186 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.186 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.186 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:48.186 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:48.186 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.121 00:11:49.121 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.121 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.121 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.379 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.379 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.379 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.379 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.637 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.637 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.637 { 00:11:49.637 "cntlid": 95, 00:11:49.637 "qid": 0, 00:11:49.637 "state": "enabled", 00:11:49.637 
"thread": "nvmf_tgt_poll_group_000", 00:11:49.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:49.638 "listen_address": { 00:11:49.638 "trtype": "TCP", 00:11:49.638 "adrfam": "IPv4", 00:11:49.638 "traddr": "10.0.0.3", 00:11:49.638 "trsvcid": "4420" 00:11:49.638 }, 00:11:49.638 "peer_address": { 00:11:49.638 "trtype": "TCP", 00:11:49.638 "adrfam": "IPv4", 00:11:49.638 "traddr": "10.0.0.1", 00:11:49.638 "trsvcid": "56418" 00:11:49.638 }, 00:11:49.638 "auth": { 00:11:49.638 "state": "completed", 00:11:49.638 "digest": "sha384", 00:11:49.638 "dhgroup": "ffdhe8192" 00:11:49.638 } 00:11:49.638 } 00:11:49.638 ]' 00:11:49.638 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.638 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.638 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.638 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:49.638 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.638 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.638 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.638 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.204 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:11:50.204 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:11:50.770 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.770 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:50.770 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.770 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.770 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.770 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:50.770 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:50.770 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.770 13:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:50.770 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:51.335 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:51.335 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.335 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:51.335 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:51.335 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:51.335 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.335 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.335 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.335 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.335 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.335 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.335 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.335 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.899 00:11:51.899 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.899 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.899 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.157 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.157 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.157 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.157 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.157 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.157 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.157 { 00:11:52.157 "cntlid": 97, 00:11:52.157 "qid": 0, 00:11:52.157 "state": "enabled", 00:11:52.157 "thread": "nvmf_tgt_poll_group_000", 00:11:52.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:52.158 "listen_address": { 00:11:52.158 "trtype": "TCP", 00:11:52.158 "adrfam": "IPv4", 00:11:52.158 "traddr": "10.0.0.3", 00:11:52.158 "trsvcid": "4420" 00:11:52.158 }, 00:11:52.158 "peer_address": { 00:11:52.158 "trtype": "TCP", 00:11:52.158 "adrfam": "IPv4", 00:11:52.158 "traddr": "10.0.0.1", 00:11:52.158 "trsvcid": "56432" 00:11:52.158 }, 00:11:52.158 "auth": { 00:11:52.158 "state": "completed", 00:11:52.158 "digest": "sha512", 00:11:52.158 "dhgroup": "null" 00:11:52.158 } 00:11:52.158 } 00:11:52.158 ]' 00:11:52.158 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.158 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:52.158 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.415 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:52.415 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.415 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.415 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.415 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.673 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:11:52.673 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:11:53.606 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.606 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:53.606 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.606 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.606 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:53.606 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.606 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:53.606 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:53.864 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:53.864 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.864 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:53.865 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:53.865 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:53.865 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.865 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.865 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.865 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.865 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.865 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.865 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.865 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.123 00:11:54.123 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.123 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.123 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.381 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.381 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.381 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.381 13:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.640 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.640 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.640 { 00:11:54.640 "cntlid": 99, 00:11:54.640 "qid": 0, 00:11:54.640 "state": "enabled", 00:11:54.640 "thread": "nvmf_tgt_poll_group_000", 00:11:54.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:54.640 "listen_address": { 00:11:54.640 "trtype": "TCP", 00:11:54.640 "adrfam": "IPv4", 00:11:54.640 "traddr": "10.0.0.3", 00:11:54.640 "trsvcid": "4420" 00:11:54.640 }, 00:11:54.640 "peer_address": { 00:11:54.640 "trtype": "TCP", 00:11:54.640 "adrfam": "IPv4", 00:11:54.640 "traddr": "10.0.0.1", 00:11:54.640 "trsvcid": "56450" 00:11:54.640 }, 00:11:54.640 "auth": { 00:11:54.640 "state": "completed", 00:11:54.640 "digest": "sha512", 00:11:54.640 "dhgroup": "null" 00:11:54.640 } 00:11:54.640 } 00:11:54.640 ]' 00:11:54.640 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.640 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:54.640 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.640 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:54.640 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.640 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.640 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.640 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.899 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:11:54.899 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:11:55.831 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.831 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:55.831 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.831 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.831 13:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.831 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.831 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:55.832 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:56.089 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:56.089 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.089 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:56.090 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:56.090 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:56.090 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.090 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.090 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.090 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.090 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.090 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.090 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.090 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.653 00:11:56.653 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.653 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.653 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.911 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.911 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.911 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.911 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.911 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.911 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.911 { 00:11:56.911 "cntlid": 101, 00:11:56.911 "qid": 0, 00:11:56.911 "state": "enabled", 00:11:56.911 "thread": "nvmf_tgt_poll_group_000", 00:11:56.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:56.911 "listen_address": { 00:11:56.911 "trtype": "TCP", 00:11:56.911 "adrfam": "IPv4", 00:11:56.911 "traddr": "10.0.0.3", 00:11:56.911 "trsvcid": "4420" 00:11:56.911 }, 00:11:56.911 "peer_address": { 00:11:56.911 "trtype": "TCP", 00:11:56.911 "adrfam": "IPv4", 00:11:56.911 "traddr": "10.0.0.1", 00:11:56.911 "trsvcid": "35246" 00:11:56.911 }, 00:11:56.911 "auth": { 00:11:56.911 "state": "completed", 00:11:56.911 "digest": "sha512", 00:11:56.911 "dhgroup": "null" 00:11:56.911 } 00:11:56.911 } 00:11:56.911 ]' 00:11:56.911 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.911 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:56.911 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.169 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:57.169 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.169 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.169 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.169 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.493 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:11:57.493 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:11:58.424 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.424 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:11:58.424 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.424 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:58.424 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.424 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.424 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:58.424 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:58.682 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:58.682 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.682 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:58.682 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:58.682 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:58.682 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.682 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:11:58.682 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.682 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.940 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.940 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:58.940 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.940 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.565 00:11:59.565 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.565 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.565 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.824 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.824 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.824 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:59.824 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.824 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.824 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.824 { 00:11:59.824 "cntlid": 103, 00:11:59.824 "qid": 0, 00:11:59.824 "state": "enabled", 00:11:59.824 "thread": "nvmf_tgt_poll_group_000", 00:11:59.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:11:59.824 "listen_address": { 00:11:59.824 "trtype": "TCP", 00:11:59.824 "adrfam": "IPv4", 00:11:59.824 "traddr": "10.0.0.3", 00:11:59.824 "trsvcid": "4420" 00:11:59.824 }, 00:11:59.824 "peer_address": { 00:11:59.824 "trtype": "TCP", 00:11:59.824 "adrfam": "IPv4", 00:11:59.824 "traddr": "10.0.0.1", 00:11:59.824 "trsvcid": "35276" 00:11:59.824 }, 00:11:59.824 "auth": { 00:11:59.824 "state": "completed", 00:11:59.824 "digest": "sha512", 00:11:59.824 "dhgroup": "null" 00:11:59.824 } 00:11:59.824 } 00:11:59.824 ]' 00:11:59.824 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.082 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:00.082 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.082 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:00.082 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.082 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.082 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.082 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.649 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:12:00.649 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:12:02.023 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.023 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:02.023 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.023 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.023 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:02.023 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:02.023 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.023 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:02.023 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:02.281 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:02.281 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.281 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:02.281 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:02.281 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:02.281 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.281 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.281 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.281 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.281 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.281 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.281 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.281 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.927 00:12:02.927 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.927 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.927 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.186 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.186 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.186 
13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.186 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.186 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.186 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.186 { 00:12:03.186 "cntlid": 105, 00:12:03.186 "qid": 0, 00:12:03.186 "state": "enabled", 00:12:03.186 "thread": "nvmf_tgt_poll_group_000", 00:12:03.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:03.186 "listen_address": { 00:12:03.186 "trtype": "TCP", 00:12:03.186 "adrfam": "IPv4", 00:12:03.186 "traddr": "10.0.0.3", 00:12:03.186 "trsvcid": "4420" 00:12:03.186 }, 00:12:03.186 "peer_address": { 00:12:03.186 "trtype": "TCP", 00:12:03.186 "adrfam": "IPv4", 00:12:03.186 "traddr": "10.0.0.1", 00:12:03.186 "trsvcid": "35312" 00:12:03.186 }, 00:12:03.186 "auth": { 00:12:03.186 "state": "completed", 00:12:03.186 "digest": "sha512", 00:12:03.186 "dhgroup": "ffdhe2048" 00:12:03.186 } 00:12:03.186 } 00:12:03.186 ]' 00:12:03.186 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.444 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.444 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.444 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:03.444 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.444 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.444 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.444 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.010 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:12:04.010 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:12:04.945 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.945 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:04.945 13:39:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.945 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.945 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.945 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.945 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:04.945 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:05.203 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:05.203 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.203 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:05.203 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:05.203 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:05.203 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.203 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.203 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.203 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.203 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.203 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.203 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.203 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.800 00:12:05.800 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.800 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.800 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.368 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:06.368 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.368 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.368 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.368 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.368 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.368 { 00:12:06.368 "cntlid": 107, 00:12:06.368 "qid": 0, 00:12:06.368 "state": "enabled", 00:12:06.368 "thread": "nvmf_tgt_poll_group_000", 00:12:06.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:06.368 "listen_address": { 00:12:06.368 "trtype": "TCP", 00:12:06.368 "adrfam": "IPv4", 00:12:06.368 "traddr": "10.0.0.3", 00:12:06.368 "trsvcid": "4420" 00:12:06.368 }, 00:12:06.368 "peer_address": { 00:12:06.368 "trtype": "TCP", 00:12:06.368 "adrfam": "IPv4", 00:12:06.368 "traddr": "10.0.0.1", 00:12:06.368 "trsvcid": "35342" 00:12:06.368 }, 00:12:06.368 "auth": { 00:12:06.368 "state": "completed", 00:12:06.368 "digest": "sha512", 00:12:06.368 "dhgroup": "ffdhe2048" 00:12:06.368 } 00:12:06.368 } 00:12:06.368 ]' 00:12:06.368 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.626 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:06.626 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.626 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:06.626 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.904 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.904 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.904 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.479 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:12:07.479 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:12:09.378 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.378 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:09.378 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.378 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.378 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.378 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.378 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:09.378 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:09.635 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:09.635 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.635 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:09.635 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:09.635 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:09.635 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.635 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.635 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.635 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.635 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.635 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.635 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.635 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.568 00:12:10.568 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.568 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.568 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.828 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.828 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.828 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.828 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.828 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.828 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.828 { 00:12:10.828 "cntlid": 109, 00:12:10.828 "qid": 0, 00:12:10.828 "state": "enabled", 00:12:10.828 "thread": "nvmf_tgt_poll_group_000", 00:12:10.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:10.828 "listen_address": { 00:12:10.828 "trtype": "TCP", 00:12:10.828 "adrfam": "IPv4", 00:12:10.828 "traddr": "10.0.0.3", 00:12:10.828 "trsvcid": "4420" 00:12:10.828 }, 00:12:10.828 "peer_address": { 00:12:10.828 "trtype": "TCP", 00:12:10.828 "adrfam": "IPv4", 00:12:10.828 "traddr": "10.0.0.1", 00:12:10.828 "trsvcid": "34232" 00:12:10.828 }, 00:12:10.828 "auth": { 00:12:10.828 "state": "completed", 00:12:10.828 "digest": "sha512", 00:12:10.828 "dhgroup": "ffdhe2048" 00:12:10.828 } 00:12:10.828 } 00:12:10.828 ]' 00:12:10.828 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.828 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.828 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.086 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:11.086 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.087 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.087 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.087 13:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.653 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:12:11.653 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:12:13.053 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
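For reference, every pass recorded in this log repeats the same connect_authenticate sequence of SPDK RPC and nvme-cli calls; a minimal sketch of one pass is reproduced below, with the host NQN, host UUID, and DHHC-1 secrets replaced by placeholders (the concrete values appear verbatim in the surrounding log entries). Target-side calls are shown as plain rpc.py invocations on the assumption that the log's rpc_cmd wrapper addresses the target application's default RPC socket.

  # host side: restrict the initiator to the digest/dhgroup pair under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # target side: allow the host NQN with the DH-CHAP key pair being exercised
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <HOST_NQN> \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # host side: attach a controller, authenticating with the same key pair
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q <HOST_NQN> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # target side: confirm the qpair negotiated the expected digest, dhgroup and auth state
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'

  # host side: tear the controller down before the next combination
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_detach_controller nvme0

  # nvme-cli path exercised by nvme_connect: in-band DH-CHAP with explicit secrets
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q <HOST_NQN> --hostid <HOST_UUID> -l 0 \
      --dhchap-secret <DHHC-1 host secret> --dhchap-ctrl-secret <DHHC-1 ctrl secret>
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # target side: drop the host entry before moving to the next digest/dhgroup/key pass
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <HOST_NQN>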
00:12:13.053 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:13.053 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.053 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.053 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.053 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.053 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:13.053 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:13.311 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:13.311 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.311 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:13.311 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:13.311 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:13.311 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.311 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:12:13.312 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.312 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.312 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.312 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:13.312 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:13.312 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:13.570 00:12:13.570 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.570 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.570 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.135 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.135 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.135 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.135 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.135 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.135 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.135 { 00:12:14.135 "cntlid": 111, 00:12:14.135 "qid": 0, 00:12:14.135 "state": "enabled", 00:12:14.135 "thread": "nvmf_tgt_poll_group_000", 00:12:14.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:14.135 "listen_address": { 00:12:14.135 "trtype": "TCP", 00:12:14.135 "adrfam": "IPv4", 00:12:14.135 "traddr": "10.0.0.3", 00:12:14.135 "trsvcid": "4420" 00:12:14.135 }, 00:12:14.135 "peer_address": { 00:12:14.135 "trtype": "TCP", 00:12:14.135 "adrfam": "IPv4", 00:12:14.135 "traddr": "10.0.0.1", 00:12:14.135 "trsvcid": "34272" 00:12:14.135 }, 00:12:14.135 "auth": { 00:12:14.135 "state": "completed", 00:12:14.135 "digest": "sha512", 00:12:14.135 "dhgroup": "ffdhe2048" 00:12:14.135 } 00:12:14.135 } 00:12:14.135 ]' 00:12:14.135 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.135 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.135 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.135 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:14.135 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.393 13:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.393 13:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.393 13:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.651 13:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:12:14.651 13:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:12:15.584 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.584 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:15.584 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.584 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.584 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.584 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:15.584 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.584 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:15.584 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:16.152 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:16.152 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.152 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:16.152 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:16.152 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:16.152 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.152 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.152 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.152 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.152 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.152 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.152 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.152 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.718 00:12:16.718 13:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.718 13:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.719 13:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.296 13:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.296 13:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.296 13:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.296 13:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.296 13:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.296 13:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.296 { 00:12:17.296 "cntlid": 113, 00:12:17.296 "qid": 0, 00:12:17.296 "state": "enabled", 00:12:17.296 "thread": "nvmf_tgt_poll_group_000", 00:12:17.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:17.296 "listen_address": { 00:12:17.296 "trtype": "TCP", 00:12:17.296 "adrfam": "IPv4", 00:12:17.296 "traddr": "10.0.0.3", 00:12:17.296 "trsvcid": "4420" 00:12:17.296 }, 00:12:17.296 "peer_address": { 00:12:17.296 "trtype": "TCP", 00:12:17.296 "adrfam": "IPv4", 00:12:17.296 "traddr": "10.0.0.1", 00:12:17.296 "trsvcid": "51984" 00:12:17.296 }, 00:12:17.296 "auth": { 00:12:17.296 "state": "completed", 00:12:17.296 "digest": "sha512", 00:12:17.296 "dhgroup": "ffdhe3072" 00:12:17.296 } 00:12:17.296 } 00:12:17.296 ]' 00:12:17.296 13:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.296 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.296 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.296 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:17.296 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.558 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.558 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.558 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.124 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:12:18.124 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret 
DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:12:20.690 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.690 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:20.690 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.690 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.690 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.690 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.690 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:20.690 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:21.257 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:21.257 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.257 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:21.257 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:21.257 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:21.257 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.257 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.257 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.257 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.257 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.257 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.257 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.257 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.871 00:12:21.871 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.871 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.871 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.438 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.438 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.438 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.438 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.438 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.438 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.438 { 00:12:22.438 "cntlid": 115, 00:12:22.438 "qid": 0, 00:12:22.438 "state": "enabled", 00:12:22.438 "thread": "nvmf_tgt_poll_group_000", 00:12:22.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:22.438 "listen_address": { 00:12:22.438 "trtype": "TCP", 00:12:22.438 "adrfam": "IPv4", 00:12:22.438 "traddr": "10.0.0.3", 00:12:22.438 "trsvcid": "4420" 00:12:22.438 }, 00:12:22.438 "peer_address": { 00:12:22.438 "trtype": "TCP", 00:12:22.438 "adrfam": "IPv4", 00:12:22.438 "traddr": "10.0.0.1", 00:12:22.438 "trsvcid": "52010" 00:12:22.438 }, 00:12:22.438 "auth": { 00:12:22.438 "state": "completed", 00:12:22.438 "digest": "sha512", 00:12:22.438 "dhgroup": "ffdhe3072" 00:12:22.438 } 00:12:22.438 } 00:12:22.438 ]' 00:12:22.438 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.438 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:22.438 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.438 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:22.438 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.697 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.697 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.697 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.266 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:12:23.266 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 
2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:12:24.204 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.204 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:24.204 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.204 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.204 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.204 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.204 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:24.204 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:24.771 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:24.771 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.771 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:24.771 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:24.771 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:24.771 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.771 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.771 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.771 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.771 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.771 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.771 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.771 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.378 00:12:25.378 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.378 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.378 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.636 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.636 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.636 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.636 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.636 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.636 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.636 { 00:12:25.636 "cntlid": 117, 00:12:25.636 "qid": 0, 00:12:25.636 "state": "enabled", 00:12:25.636 "thread": "nvmf_tgt_poll_group_000", 00:12:25.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:25.636 "listen_address": { 00:12:25.636 "trtype": "TCP", 00:12:25.636 "adrfam": "IPv4", 00:12:25.636 "traddr": "10.0.0.3", 00:12:25.636 "trsvcid": "4420" 00:12:25.636 }, 00:12:25.636 "peer_address": { 00:12:25.636 "trtype": "TCP", 00:12:25.636 "adrfam": "IPv4", 00:12:25.636 "traddr": "10.0.0.1", 00:12:25.636 "trsvcid": "52040" 00:12:25.636 }, 00:12:25.636 "auth": { 00:12:25.636 "state": "completed", 00:12:25.636 "digest": "sha512", 00:12:25.636 "dhgroup": "ffdhe3072" 00:12:25.636 } 00:12:25.636 } 00:12:25.636 ]' 00:12:25.636 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.895 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.895 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.895 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:25.895 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.895 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.895 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.895 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.463 13:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:12:26.463 13:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:12:28.363 13:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.363 13:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:28.363 13:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.363 13:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.363 13:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.363 13:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.363 13:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:28.363 13:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:28.620 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:28.621 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.621 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:28.621 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:28.621 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:28.621 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.621 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:12:28.621 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.621 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.879 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.879 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:28.879 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.879 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:29.138 00:12:29.396 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.396 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.396 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.655 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.655 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.655 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.655 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.655 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.655 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.655 { 00:12:29.655 "cntlid": 119, 00:12:29.655 "qid": 0, 00:12:29.655 "state": "enabled", 00:12:29.655 "thread": "nvmf_tgt_poll_group_000", 00:12:29.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:29.655 "listen_address": { 00:12:29.655 "trtype": "TCP", 00:12:29.655 "adrfam": "IPv4", 00:12:29.655 "traddr": "10.0.0.3", 00:12:29.655 "trsvcid": "4420" 00:12:29.655 }, 00:12:29.655 "peer_address": { 00:12:29.655 "trtype": "TCP", 00:12:29.655 "adrfam": "IPv4", 00:12:29.655 "traddr": "10.0.0.1", 00:12:29.655 "trsvcid": "45064" 00:12:29.655 }, 00:12:29.655 "auth": { 00:12:29.655 "state": "completed", 00:12:29.655 "digest": "sha512", 00:12:29.655 "dhgroup": "ffdhe3072" 00:12:29.655 } 00:12:29.655 } 00:12:29.655 ]' 00:12:29.914 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.914 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.914 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.914 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:29.914 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.914 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.914 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.914 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.172 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:12:30.172 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:12:31.107 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.107 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:31.107 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.107 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.107 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.107 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:31.107 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.107 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:31.107 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:31.366 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:31.366 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.366 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:31.366 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:31.366 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:31.366 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.366 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.366 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.366 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.366 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.366 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.366 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.366 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.957 00:12:31.957 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.957 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.957 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.524 { 00:12:32.524 "cntlid": 121, 00:12:32.524 "qid": 0, 00:12:32.524 "state": "enabled", 00:12:32.524 "thread": "nvmf_tgt_poll_group_000", 00:12:32.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:32.524 "listen_address": { 00:12:32.524 "trtype": "TCP", 00:12:32.524 "adrfam": "IPv4", 00:12:32.524 "traddr": "10.0.0.3", 00:12:32.524 "trsvcid": "4420" 00:12:32.524 }, 00:12:32.524 "peer_address": { 00:12:32.524 "trtype": "TCP", 00:12:32.524 "adrfam": "IPv4", 00:12:32.524 "traddr": "10.0.0.1", 00:12:32.524 "trsvcid": "45102" 00:12:32.524 }, 00:12:32.524 "auth": { 00:12:32.524 "state": "completed", 00:12:32.524 "digest": "sha512", 00:12:32.524 "dhgroup": "ffdhe4096" 00:12:32.524 } 00:12:32.524 } 00:12:32.524 ]' 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.524 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.090 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret 
DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:12:33.090 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:12:34.027 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.027 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:34.027 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.027 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.027 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.027 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.027 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:34.027 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:34.595 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:34.595 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.595 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:34.595 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:34.595 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:34.595 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.595 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.595 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.595 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.595 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.595 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.595 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.595 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.853 00:12:34.853 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.853 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.853 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.111 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.111 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.111 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.111 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.111 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.111 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.111 { 00:12:35.111 "cntlid": 123, 00:12:35.111 "qid": 0, 00:12:35.111 "state": "enabled", 00:12:35.111 "thread": "nvmf_tgt_poll_group_000", 00:12:35.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:35.111 "listen_address": { 00:12:35.111 "trtype": "TCP", 00:12:35.111 "adrfam": "IPv4", 00:12:35.111 "traddr": "10.0.0.3", 00:12:35.111 "trsvcid": "4420" 00:12:35.111 }, 00:12:35.111 "peer_address": { 00:12:35.111 "trtype": "TCP", 00:12:35.111 "adrfam": "IPv4", 00:12:35.111 "traddr": "10.0.0.1", 00:12:35.111 "trsvcid": "45132" 00:12:35.111 }, 00:12:35.111 "auth": { 00:12:35.111 "state": "completed", 00:12:35.111 "digest": "sha512", 00:12:35.111 "dhgroup": "ffdhe4096" 00:12:35.111 } 00:12:35.111 } 00:12:35.111 ]' 00:12:35.111 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.369 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.369 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.369 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:35.369 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.369 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.369 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.369 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.691 13:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:12:35.691 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:12:36.627 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.627 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:36.627 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.627 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.627 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.627 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.627 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:36.627 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:36.885 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:36.885 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.885 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:36.885 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:36.885 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:36.885 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.885 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.885 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.885 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.886 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.886 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.886 13:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.886 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.451 00:12:37.451 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.451 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.451 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.710 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.710 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.710 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.710 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.710 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.710 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.710 { 00:12:37.710 "cntlid": 125, 00:12:37.710 "qid": 0, 00:12:37.710 "state": "enabled", 00:12:37.710 "thread": "nvmf_tgt_poll_group_000", 00:12:37.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:37.710 "listen_address": { 00:12:37.710 "trtype": "TCP", 00:12:37.710 "adrfam": "IPv4", 00:12:37.710 "traddr": "10.0.0.3", 00:12:37.710 "trsvcid": "4420" 00:12:37.710 }, 00:12:37.710 "peer_address": { 00:12:37.710 "trtype": "TCP", 00:12:37.710 "adrfam": "IPv4", 00:12:37.710 "traddr": "10.0.0.1", 00:12:37.710 "trsvcid": "46126" 00:12:37.710 }, 00:12:37.710 "auth": { 00:12:37.710 "state": "completed", 00:12:37.710 "digest": "sha512", 00:12:37.710 "dhgroup": "ffdhe4096" 00:12:37.710 } 00:12:37.710 } 00:12:37.710 ]' 00:12:37.710 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.711 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:37.711 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.711 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:37.711 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.969 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.969 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.969 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.227 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:12:38.227 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:12:39.161 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.161 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:39.161 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.161 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.161 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.161 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.161 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:39.161 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:39.475 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:39.475 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.475 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:39.475 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:39.475 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:39.475 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.475 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:12:39.475 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.475 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.475 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.475 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:39.475 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.475 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:40.041 00:12:40.041 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.041 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.041 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.606 { 00:12:40.606 "cntlid": 127, 00:12:40.606 "qid": 0, 00:12:40.606 "state": "enabled", 00:12:40.606 "thread": "nvmf_tgt_poll_group_000", 00:12:40.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:40.606 "listen_address": { 00:12:40.606 "trtype": "TCP", 00:12:40.606 "adrfam": "IPv4", 00:12:40.606 "traddr": "10.0.0.3", 00:12:40.606 "trsvcid": "4420" 00:12:40.606 }, 00:12:40.606 "peer_address": { 00:12:40.606 "trtype": "TCP", 00:12:40.606 "adrfam": "IPv4", 00:12:40.606 "traddr": "10.0.0.1", 00:12:40.606 "trsvcid": "46158" 00:12:40.606 }, 00:12:40.606 "auth": { 00:12:40.606 "state": "completed", 00:12:40.606 "digest": "sha512", 00:12:40.606 "dhgroup": "ffdhe4096" 00:12:40.606 } 00:12:40.606 } 00:12:40.606 ]' 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.606 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.170 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:12:41.171 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:12:42.102 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.102 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:42.102 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.102 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.102 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.102 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:42.102 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.102 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:42.102 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:42.668 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:42.668 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.668 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:42.668 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:42.668 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:42.668 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.668 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.668 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.668 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.668 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.668 13:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.668 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.668 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.233 00:12:43.233 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.233 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.233 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.511 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.511 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.511 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.511 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.511 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.511 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.511 { 00:12:43.511 "cntlid": 129, 00:12:43.511 "qid": 0, 00:12:43.511 "state": "enabled", 00:12:43.511 "thread": "nvmf_tgt_poll_group_000", 00:12:43.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:43.511 "listen_address": { 00:12:43.511 "trtype": "TCP", 00:12:43.511 "adrfam": "IPv4", 00:12:43.511 "traddr": "10.0.0.3", 00:12:43.511 "trsvcid": "4420" 00:12:43.511 }, 00:12:43.511 "peer_address": { 00:12:43.511 "trtype": "TCP", 00:12:43.511 "adrfam": "IPv4", 00:12:43.511 "traddr": "10.0.0.1", 00:12:43.511 "trsvcid": "46188" 00:12:43.511 }, 00:12:43.511 "auth": { 00:12:43.511 "state": "completed", 00:12:43.511 "digest": "sha512", 00:12:43.511 "dhgroup": "ffdhe6144" 00:12:43.511 } 00:12:43.511 } 00:12:43.511 ]' 00:12:43.511 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.511 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.511 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.769 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:43.769 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.769 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.769 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.769 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.335 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:12:44.335 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:12:45.269 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.269 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:45.269 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.269 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.269 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.269 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.269 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:45.269 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:45.528 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:45.528 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.528 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:45.528 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:45.528 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:45.528 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.528 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.528 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.528 13:40:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.528 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.528 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.528 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.528 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.457 00:12:46.457 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.457 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.457 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.714 { 00:12:46.714 "cntlid": 131, 00:12:46.714 "qid": 0, 00:12:46.714 "state": "enabled", 00:12:46.714 "thread": "nvmf_tgt_poll_group_000", 00:12:46.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:46.714 "listen_address": { 00:12:46.714 "trtype": "TCP", 00:12:46.714 "adrfam": "IPv4", 00:12:46.714 "traddr": "10.0.0.3", 00:12:46.714 "trsvcid": "4420" 00:12:46.714 }, 00:12:46.714 "peer_address": { 00:12:46.714 "trtype": "TCP", 00:12:46.714 "adrfam": "IPv4", 00:12:46.714 "traddr": "10.0.0.1", 00:12:46.714 "trsvcid": "45806" 00:12:46.714 }, 00:12:46.714 "auth": { 00:12:46.714 "state": "completed", 00:12:46.714 "digest": "sha512", 00:12:46.714 "dhgroup": "ffdhe6144" 00:12:46.714 } 00:12:46.714 } 00:12:46.714 ]' 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.714 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.283 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:12:47.283 13:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:12:47.848 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.849 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:47.849 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.849 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.849 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.849 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.849 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:47.849 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:48.414 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:48.414 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.414 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:48.414 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:48.414 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:48.414 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.414 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.414 13:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.414 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.414 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.414 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.414 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.414 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.980 00:12:48.980 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.980 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.980 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.238 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.238 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.238 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.239 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.239 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.239 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.239 { 00:12:49.239 "cntlid": 133, 00:12:49.239 "qid": 0, 00:12:49.239 "state": "enabled", 00:12:49.239 "thread": "nvmf_tgt_poll_group_000", 00:12:49.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:49.239 "listen_address": { 00:12:49.239 "trtype": "TCP", 00:12:49.239 "adrfam": "IPv4", 00:12:49.239 "traddr": "10.0.0.3", 00:12:49.239 "trsvcid": "4420" 00:12:49.239 }, 00:12:49.239 "peer_address": { 00:12:49.239 "trtype": "TCP", 00:12:49.239 "adrfam": "IPv4", 00:12:49.239 "traddr": "10.0.0.1", 00:12:49.239 "trsvcid": "45840" 00:12:49.239 }, 00:12:49.239 "auth": { 00:12:49.239 "state": "completed", 00:12:49.239 "digest": "sha512", 00:12:49.239 "dhgroup": "ffdhe6144" 00:12:49.239 } 00:12:49.239 } 00:12:49.239 ]' 00:12:49.239 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.239 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.239 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.239 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:49.239 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.239 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.239 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.239 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.497 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:12:49.497 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:12:50.439 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.439 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:50.439 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.439 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.439 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.439 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.439 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:50.439 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:50.697 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:50.697 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.697 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:50.697 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:50.697 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:50.697 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.697 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:12:50.697 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.697 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.697 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.697 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:50.697 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.697 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:51.264 00:12:51.264 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.264 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.264 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.523 { 00:12:51.523 "cntlid": 135, 00:12:51.523 "qid": 0, 00:12:51.523 "state": "enabled", 00:12:51.523 "thread": "nvmf_tgt_poll_group_000", 00:12:51.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:51.523 "listen_address": { 00:12:51.523 "trtype": "TCP", 00:12:51.523 "adrfam": "IPv4", 00:12:51.523 "traddr": "10.0.0.3", 00:12:51.523 "trsvcid": "4420" 00:12:51.523 }, 00:12:51.523 "peer_address": { 00:12:51.523 "trtype": "TCP", 00:12:51.523 "adrfam": "IPv4", 00:12:51.523 "traddr": "10.0.0.1", 00:12:51.523 "trsvcid": "45870" 00:12:51.523 }, 00:12:51.523 "auth": { 00:12:51.523 "state": "completed", 00:12:51.523 "digest": "sha512", 00:12:51.523 "dhgroup": "ffdhe6144" 00:12:51.523 } 00:12:51.523 } 00:12:51.523 ]' 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.523 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.090 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:12:52.090 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:12:52.656 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.656 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:52.656 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.656 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.656 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.656 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:52.656 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.656 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:52.656 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:52.915 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:52.915 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.915 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:52.915 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:52.915 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:52.915 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.915 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.915 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.915 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.915 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.915 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.915 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.915 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.853 00:12:53.853 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.853 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.853 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.112 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.112 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.112 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.112 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.112 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.112 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.112 { 00:12:54.112 "cntlid": 137, 00:12:54.112 "qid": 0, 00:12:54.112 "state": "enabled", 00:12:54.112 "thread": "nvmf_tgt_poll_group_000", 00:12:54.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:54.112 "listen_address": { 00:12:54.112 "trtype": "TCP", 00:12:54.112 "adrfam": "IPv4", 00:12:54.112 "traddr": "10.0.0.3", 00:12:54.112 "trsvcid": "4420" 00:12:54.112 }, 00:12:54.112 "peer_address": { 00:12:54.112 "trtype": "TCP", 00:12:54.112 "adrfam": "IPv4", 00:12:54.112 "traddr": "10.0.0.1", 00:12:54.112 "trsvcid": "45892" 00:12:54.112 }, 00:12:54.112 "auth": { 00:12:54.112 "state": "completed", 00:12:54.112 "digest": "sha512", 00:12:54.112 "dhgroup": "ffdhe8192" 00:12:54.112 } 00:12:54.112 } 00:12:54.112 ]' 00:12:54.112 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.112 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.112 13:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.112 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:54.112 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.112 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.112 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.112 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.413 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:12:54.413 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:12:55.375 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.376 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:55.376 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.376 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.376 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.376 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.376 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:55.376 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:55.635 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:55.635 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.635 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:55.635 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:55.635 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:55.635 13:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.635 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.635 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.635 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.635 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.635 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.635 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.635 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.199 00:12:56.199 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.199 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.199 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.457 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.457 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.457 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.457 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.457 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.457 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.457 { 00:12:56.457 "cntlid": 139, 00:12:56.457 "qid": 0, 00:12:56.457 "state": "enabled", 00:12:56.457 "thread": "nvmf_tgt_poll_group_000", 00:12:56.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:56.457 "listen_address": { 00:12:56.457 "trtype": "TCP", 00:12:56.457 "adrfam": "IPv4", 00:12:56.457 "traddr": "10.0.0.3", 00:12:56.457 "trsvcid": "4420" 00:12:56.457 }, 00:12:56.457 "peer_address": { 00:12:56.457 "trtype": "TCP", 00:12:56.457 "adrfam": "IPv4", 00:12:56.457 "traddr": "10.0.0.1", 00:12:56.457 "trsvcid": "38576" 00:12:56.457 }, 00:12:56.457 "auth": { 00:12:56.457 "state": "completed", 00:12:56.457 "digest": "sha512", 00:12:56.457 "dhgroup": "ffdhe8192" 00:12:56.457 } 00:12:56.457 } 00:12:56.457 ]' 00:12:56.457 13:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.715 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.715 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.715 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:56.715 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.715 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.715 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.715 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.973 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:12:56.973 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: --dhchap-ctrl-secret DHHC-1:02:NjlkYWFmZDVmZjY4NzJkNGZkZmE3ZDNkZmQ5OTMwZjZhMmViYmE3NzhmYmQ4ZGNkFxX1IQ==: 00:12:57.908 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.908 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:12:57.908 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.908 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.908 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.908 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.908 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:57.908 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:58.167 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:58.167 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.167 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.167 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:58.167 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:58.167 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.167 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.167 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.167 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.167 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.167 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.167 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.167 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.735 00:12:58.735 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.735 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.735 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.993 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.994 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.994 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.994 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.994 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.994 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.994 { 00:12:58.994 "cntlid": 141, 00:12:58.994 "qid": 0, 00:12:58.994 "state": "enabled", 00:12:58.994 "thread": "nvmf_tgt_poll_group_000", 00:12:58.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:12:58.994 "listen_address": { 00:12:58.994 "trtype": "TCP", 00:12:58.994 "adrfam": "IPv4", 00:12:58.994 "traddr": "10.0.0.3", 00:12:58.994 "trsvcid": "4420" 00:12:58.994 }, 00:12:58.994 "peer_address": { 00:12:58.994 "trtype": "TCP", 00:12:58.994 "adrfam": "IPv4", 00:12:58.994 "traddr": "10.0.0.1", 00:12:58.994 "trsvcid": "38608" 00:12:58.994 }, 00:12:58.994 "auth": { 00:12:58.994 "state": "completed", 00:12:58.994 "digest": 
"sha512", 00:12:58.994 "dhgroup": "ffdhe8192" 00:12:58.994 } 00:12:58.994 } 00:12:58.994 ]' 00:12:58.994 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.253 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.253 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.253 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:59.253 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.253 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.253 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.253 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.512 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:12:59.512 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:01:OTNiMTBmZjRlNzUwMmY5MGM5MDYzMWE5ZjVhOWNhNWVzIQXn: 00:13:00.445 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.445 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:13:00.445 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.445 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.445 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.445 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.445 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:00.445 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:00.703 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:00.703 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.703 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:00.703 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:00.703 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:00.703 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.703 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:13:00.703 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.704 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.704 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.704 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:00.704 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:00.704 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:01.293 00:13:01.293 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.293 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.293 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.551 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.809 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.809 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.809 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.809 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.809 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.809 { 00:13:01.809 "cntlid": 143, 00:13:01.809 "qid": 0, 00:13:01.809 "state": "enabled", 00:13:01.809 "thread": "nvmf_tgt_poll_group_000", 00:13:01.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:13:01.809 "listen_address": { 00:13:01.809 "trtype": "TCP", 00:13:01.809 "adrfam": "IPv4", 00:13:01.809 "traddr": "10.0.0.3", 00:13:01.809 "trsvcid": "4420" 00:13:01.809 }, 00:13:01.809 "peer_address": { 00:13:01.809 "trtype": "TCP", 00:13:01.809 "adrfam": "IPv4", 00:13:01.809 "traddr": "10.0.0.1", 00:13:01.809 "trsvcid": "38626" 00:13:01.809 }, 00:13:01.809 "auth": { 00:13:01.809 "state": "completed", 00:13:01.809 
"digest": "sha512", 00:13:01.809 "dhgroup": "ffdhe8192" 00:13:01.809 } 00:13:01.809 } 00:13:01.809 ]' 00:13:01.809 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.809 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.809 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.809 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:01.809 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.809 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.809 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.809 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.375 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:13:02.375 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:13:02.942 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.942 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:13:02.942 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.942 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.942 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.942 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:02.942 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:02.942 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:02.942 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:02.942 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:02.942 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:03.201 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:03.201 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.201 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.201 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:03.201 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:03.201 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.201 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.201 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.201 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.201 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.201 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.201 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.201 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.137 00:13:04.137 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.137 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.137 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.137 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.137 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.137 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.137 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.137 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.137 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.137 { 00:13:04.137 "cntlid": 145, 00:13:04.137 "qid": 0, 00:13:04.137 "state": "enabled", 00:13:04.137 "thread": "nvmf_tgt_poll_group_000", 00:13:04.137 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:13:04.137 "listen_address": { 00:13:04.137 "trtype": "TCP", 00:13:04.137 "adrfam": "IPv4", 00:13:04.137 "traddr": "10.0.0.3", 00:13:04.137 "trsvcid": "4420" 00:13:04.137 }, 00:13:04.137 "peer_address": { 00:13:04.137 "trtype": "TCP", 00:13:04.137 "adrfam": "IPv4", 00:13:04.137 "traddr": "10.0.0.1", 00:13:04.137 "trsvcid": "38644" 00:13:04.137 }, 00:13:04.137 "auth": { 00:13:04.137 "state": "completed", 00:13:04.137 "digest": "sha512", 00:13:04.137 "dhgroup": "ffdhe8192" 00:13:04.137 } 00:13:04.137 } 00:13:04.137 ]' 00:13:04.137 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.137 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.137 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.395 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:04.395 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.395 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.395 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.395 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.652 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:13:04.652 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:00:NjY2ZjExNTQ5ZDUzYTY3NTJhMmQ3YzZlZDlhM2Y5NTMyODVmYzhlNTRjNzVkZDVm+OZ69A==: --dhchap-ctrl-secret DHHC-1:03:NDhjYTcwOTMzZTMzNTFhY2I3NDFiMzRkNzljZmViYzE1M2QwYjJhOWNlYzA4ZDJmOTg3ZTk1MzRjMTcwNzA3MJLZU+I=: 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 00:13:05.586 13:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:05.586 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:06.154 request: 00:13:06.154 { 00:13:06.154 "name": "nvme0", 00:13:06.154 "trtype": "tcp", 00:13:06.154 "traddr": "10.0.0.3", 00:13:06.154 "adrfam": "ipv4", 00:13:06.154 "trsvcid": "4420", 00:13:06.154 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:06.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:13:06.154 "prchk_reftag": false, 00:13:06.154 "prchk_guard": false, 00:13:06.154 "hdgst": false, 00:13:06.154 "ddgst": false, 00:13:06.154 "dhchap_key": "key2", 00:13:06.154 "allow_unrecognized_csi": false, 00:13:06.154 "method": "bdev_nvme_attach_controller", 00:13:06.154 "req_id": 1 00:13:06.154 } 00:13:06.154 Got JSON-RPC error response 00:13:06.154 response: 00:13:06.154 { 00:13:06.154 "code": -5, 00:13:06.154 "message": "Input/output error" 00:13:06.154 } 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:13:06.154 
13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:06.154 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:06.722 request: 00:13:06.722 { 00:13:06.722 "name": "nvme0", 00:13:06.722 "trtype": "tcp", 00:13:06.722 "traddr": "10.0.0.3", 00:13:06.722 "adrfam": "ipv4", 00:13:06.722 "trsvcid": "4420", 00:13:06.722 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:06.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:13:06.722 "prchk_reftag": false, 00:13:06.722 "prchk_guard": false, 00:13:06.722 "hdgst": false, 00:13:06.722 "ddgst": false, 00:13:06.722 "dhchap_key": "key1", 00:13:06.722 "dhchap_ctrlr_key": "ckey2", 00:13:06.722 "allow_unrecognized_csi": false, 00:13:06.722 "method": "bdev_nvme_attach_controller", 00:13:06.722 "req_id": 1 00:13:06.722 } 00:13:06.722 Got JSON-RPC error response 00:13:06.722 response: 00:13:06.722 { 
00:13:06.722 "code": -5, 00:13:06.722 "message": "Input/output error" 00:13:06.722 } 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.722 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.291 
request: 00:13:07.291 { 00:13:07.291 "name": "nvme0", 00:13:07.291 "trtype": "tcp", 00:13:07.291 "traddr": "10.0.0.3", 00:13:07.291 "adrfam": "ipv4", 00:13:07.291 "trsvcid": "4420", 00:13:07.291 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:07.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:13:07.291 "prchk_reftag": false, 00:13:07.291 "prchk_guard": false, 00:13:07.291 "hdgst": false, 00:13:07.291 "ddgst": false, 00:13:07.291 "dhchap_key": "key1", 00:13:07.291 "dhchap_ctrlr_key": "ckey1", 00:13:07.291 "allow_unrecognized_csi": false, 00:13:07.291 "method": "bdev_nvme_attach_controller", 00:13:07.291 "req_id": 1 00:13:07.291 } 00:13:07.291 Got JSON-RPC error response 00:13:07.291 response: 00:13:07.291 { 00:13:07.291 "code": -5, 00:13:07.291 "message": "Input/output error" 00:13:07.291 } 00:13:07.552 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:07.552 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:07.552 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:07.552 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:07.552 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:13:07.552 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.552 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.552 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.552 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 66989 00:13:07.552 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 66989 ']' 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 66989 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66989 00:13:07.553 killing process with pid 66989 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66989' 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 66989 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 66989 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:07.553 13:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=70718 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 70718 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70718 ']' 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:07.553 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.852 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.852 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:07.852 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:07.852 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:07.852 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.111 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.111 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:08.111 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70718 00:13:08.111 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70718 ']' 00:13:08.111 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.111 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:08.111 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:08.111 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:08.111 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.369 null0 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.woV 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ANI ]] 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ANI 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cIe 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.8hS ]] 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8hS 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:08.369 13:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.bPJ 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.YiY ]] 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YiY 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.HQx 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
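The entries around this point exercise SPDK's DH-HMAC-CHAP path: key files are registered in the keyring, bound to the subsystem's host entry on the target, and then used by the host-side attach. A minimal sketch of that flow follows, reusing only the RPCs, NQNs, address, and key names already visible in this trace; it is illustrative and not part of the recorded run (paths such as /tmp/spdk.key-sha512.HQx are the ones the test generated above).

    # target side: load the key file into the keyring and allow the host with a DH-HMAC-CHAP key
    scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.HQx
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3

    # host side: restrict the initiator to sha512/ffdhe8192, then attach with the same key
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3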
00:13:08.369 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:09.748 nvme0n1 00:13:09.748 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.748 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.748 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.748 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.748 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.748 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.748 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.748 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.748 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.748 { 00:13:09.748 "cntlid": 1, 00:13:09.748 "qid": 0, 00:13:09.748 "state": "enabled", 00:13:09.748 "thread": "nvmf_tgt_poll_group_000", 00:13:09.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:13:09.748 "listen_address": { 00:13:09.748 "trtype": "TCP", 00:13:09.748 "adrfam": "IPv4", 00:13:09.748 "traddr": "10.0.0.3", 00:13:09.748 "trsvcid": "4420" 00:13:09.748 }, 00:13:09.748 "peer_address": { 00:13:09.748 "trtype": "TCP", 00:13:09.748 "adrfam": "IPv4", 00:13:09.748 "traddr": "10.0.0.1", 00:13:09.748 "trsvcid": "51276" 00:13:09.748 }, 00:13:09.748 "auth": { 00:13:09.748 "state": "completed", 00:13:09.748 "digest": "sha512", 00:13:09.748 "dhgroup": "ffdhe8192" 00:13:09.748 } 00:13:09.748 } 00:13:09.748 ]' 00:13:09.748 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.748 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.748 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.008 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:10.008 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.008 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.008 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.008 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.266 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:13:10.266 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:13:11.200 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.200 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:13:11.200 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.200 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.200 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.200 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key3 00:13:11.200 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.200 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.200 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.200 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:11.200 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:11.459 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:11.459 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:11.459 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:11.459 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:11.459 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.459 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:11.459 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.459 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:11.459 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:11.459 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:11.717 request: 00:13:11.717 { 00:13:11.717 "name": "nvme0", 00:13:11.717 "trtype": "tcp", 00:13:11.717 "traddr": "10.0.0.3", 00:13:11.717 "adrfam": "ipv4", 00:13:11.717 "trsvcid": "4420", 00:13:11.717 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:11.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:13:11.717 "prchk_reftag": false, 00:13:11.717 "prchk_guard": false, 00:13:11.717 "hdgst": false, 00:13:11.717 "ddgst": false, 00:13:11.717 "dhchap_key": "key3", 00:13:11.717 "allow_unrecognized_csi": false, 00:13:11.717 "method": "bdev_nvme_attach_controller", 00:13:11.717 "req_id": 1 00:13:11.717 } 00:13:11.717 Got JSON-RPC error response 00:13:11.717 response: 00:13:11.717 { 00:13:11.717 "code": -5, 00:13:11.717 "message": "Input/output error" 00:13:11.717 } 00:13:11.717 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:11.717 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:11.717 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:11.717 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:11.717 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:11.717 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:11.717 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:11.717 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:11.975 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:11.975 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:11.975 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:11.975 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:11.975 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.975 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:11.975 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.975 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:11.975 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:11.976 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:12.235 request: 00:13:12.235 { 00:13:12.235 "name": "nvme0", 00:13:12.235 "trtype": "tcp", 00:13:12.235 "traddr": "10.0.0.3", 00:13:12.235 "adrfam": "ipv4", 00:13:12.235 "trsvcid": "4420", 00:13:12.235 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:12.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:13:12.235 "prchk_reftag": false, 00:13:12.235 "prchk_guard": false, 00:13:12.235 "hdgst": false, 00:13:12.235 "ddgst": false, 00:13:12.235 "dhchap_key": "key3", 00:13:12.235 "allow_unrecognized_csi": false, 00:13:12.235 "method": "bdev_nvme_attach_controller", 00:13:12.235 "req_id": 1 00:13:12.235 } 00:13:12.235 Got JSON-RPC error response 00:13:12.235 response: 00:13:12.235 { 00:13:12.235 "code": -5, 00:13:12.235 "message": "Input/output error" 00:13:12.235 } 00:13:12.235 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:12.235 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:12.235 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:12.235 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:12.235 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:12.235 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:12.235 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:12.235 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:12.235 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:12.235 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:12.493 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:13:12.493 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.493 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:12.751 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:13.009 request: 00:13:13.009 { 00:13:13.009 "name": "nvme0", 00:13:13.009 "trtype": "tcp", 00:13:13.009 "traddr": "10.0.0.3", 00:13:13.009 "adrfam": "ipv4", 00:13:13.009 "trsvcid": "4420", 00:13:13.009 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:13.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:13:13.009 "prchk_reftag": false, 00:13:13.009 "prchk_guard": false, 00:13:13.009 "hdgst": false, 00:13:13.009 "ddgst": false, 00:13:13.009 "dhchap_key": "key0", 00:13:13.009 "dhchap_ctrlr_key": "key1", 00:13:13.009 "allow_unrecognized_csi": false, 00:13:13.009 "method": "bdev_nvme_attach_controller", 00:13:13.009 "req_id": 1 00:13:13.009 } 00:13:13.009 Got JSON-RPC error response 00:13:13.009 response: 00:13:13.009 { 00:13:13.009 "code": -5, 00:13:13.009 "message": "Input/output error" 00:13:13.009 } 00:13:13.267 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:13.267 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:13.267 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:13.267 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:13:13.267 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:13.267 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:13.267 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:13.525 nvme0n1 00:13:13.525 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:13.525 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.525 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:13.807 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.807 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.807 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.065 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 00:13:14.065 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.065 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.324 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.324 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:14.324 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:14.324 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:15.257 nvme0n1 00:13:15.515 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:15.515 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.515 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:15.773 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.773 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:15.773 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.773 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.773 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.773 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:15.773 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.773 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:16.031 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.031 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:13:16.031 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --hostid 2b7d6042-0a58-4103-9990-589a1a785035 -l 0 --dhchap-secret DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: --dhchap-ctrl-secret DHHC-1:03:YTI1OTEwYTdmM2U0ZjdjNDg2NjNhZDA5M2ZmMjU1YmEwNWY2Mjc0YjEwNzI5OGM5Yzg0MzY3MTdiYjYxNTRjM9m3Nv0=: 00:13:16.964 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:16.964 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:16.964 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:16.964 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:16.964 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:16.964 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:16.964 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:16.964 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.964 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.222 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:17.222 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:17.222 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:17.222 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:17.222 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.222 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:17.222 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.222 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:17.222 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:17.222 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:17.786 request: 00:13:17.786 { 00:13:17.786 "name": "nvme0", 00:13:17.786 "trtype": "tcp", 00:13:17.786 "traddr": "10.0.0.3", 00:13:17.786 "adrfam": "ipv4", 00:13:17.786 "trsvcid": "4420", 00:13:17.786 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:17.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035", 00:13:17.786 "prchk_reftag": false, 00:13:17.786 "prchk_guard": false, 00:13:17.786 "hdgst": false, 00:13:17.786 "ddgst": false, 00:13:17.786 "dhchap_key": "key1", 00:13:17.786 "allow_unrecognized_csi": false, 00:13:17.786 "method": "bdev_nvme_attach_controller", 00:13:17.786 "req_id": 1 00:13:17.786 } 00:13:17.786 Got JSON-RPC error response 00:13:17.786 response: 00:13:17.786 { 00:13:17.786 "code": -5, 00:13:17.787 "message": "Input/output error" 00:13:17.787 } 00:13:17.787 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:17.787 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:17.787 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:17.787 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:17.787 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:17.787 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:17.787 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:19.161 nvme0n1 00:13:19.161 
13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:19.161 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.161 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:19.419 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.419 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.419 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.676 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:13:19.676 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.676 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.676 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.676 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:19.676 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:19.676 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:20.243 nvme0n1 00:13:20.243 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:20.243 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.243 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:20.500 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.500 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.500 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.759 13:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: '' 2s 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: ]] 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YWJiODk4OGMxOWJmMjAwM2IyM2YwYzRjYzZkNjA4NzEDbuu2: 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:20.759 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:22.661 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:22.661 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:22.661 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:22.661 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:22.661 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:22.661 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: 2s 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:22.919 13:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: ]] 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MmUzMGQ1N2IwNjk0NDFmNjRmYWM4MDE4MGVjNTQwY2VhMDMxOGMyMjhkY2M5N2Q3x0fWbg==: 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:22.919 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:24.821 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:26.198 nvme0n1 00:13:26.198 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:26.198 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.198 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.198 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.198 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:26.198 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:26.767 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:26.767 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:26.767 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.026 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.026 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:13:27.026 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.026 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.026 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.026 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:27.026 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:27.285 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:27.285 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:27.285 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.545 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.545 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:27.545 13:41:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.545 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.545 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.545 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:27.545 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:27.545 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:27.545 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:27.545 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.545 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:27.545 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.545 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:27.545 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:28.483 request: 00:13:28.483 { 00:13:28.483 "name": "nvme0", 00:13:28.483 "dhchap_key": "key1", 00:13:28.483 "dhchap_ctrlr_key": "key3", 00:13:28.483 "method": "bdev_nvme_set_keys", 00:13:28.483 "req_id": 1 00:13:28.483 } 00:13:28.483 Got JSON-RPC error response 00:13:28.483 response: 00:13:28.483 { 00:13:28.483 "code": -13, 00:13:28.483 "message": "Permission denied" 00:13:28.483 } 00:13:28.483 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:28.483 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:28.483 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:28.483 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:28.483 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:28.483 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:28.483 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.483 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:28.483 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:29.860 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:29.860 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:29.860 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.860 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:29.860 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:29.860 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.860 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.860 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.860 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:29.860 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:29.860 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:31.254 nvme0n1 00:13:31.254 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:31.254 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.254 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.254 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.254 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:31.254 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:31.254 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:31.254 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:31.254 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.254 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:31.254 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.254 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:31.254 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:31.821 request: 00:13:31.821 { 00:13:31.821 "name": "nvme0", 00:13:31.821 "dhchap_key": "key2", 00:13:31.821 "dhchap_ctrlr_key": "key0", 00:13:31.821 "method": "bdev_nvme_set_keys", 00:13:31.821 "req_id": 1 00:13:31.821 } 00:13:31.821 Got JSON-RPC error response 00:13:31.821 response: 00:13:31.821 { 00:13:31.821 "code": -13, 00:13:31.821 "message": "Permission denied" 00:13:31.821 } 00:13:31.821 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:31.821 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:31.821 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:31.821 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:31.821 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:31.821 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.821 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:32.080 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:32.080 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:33.016 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:33.017 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:33.017 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.275 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:33.275 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:33.275 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:33.275 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67019 00:13:33.275 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67019 ']' 00:13:33.275 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67019 00:13:33.275 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:33.534 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:33.534 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67019 00:13:33.534 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:33.534 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:33.534 killing process with pid 67019 00:13:33.534 13:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67019' 00:13:33.534 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67019 00:13:33.534 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67019 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:33.793 rmmod nvme_tcp 00:13:33.793 rmmod nvme_fabrics 00:13:33.793 rmmod nvme_keyring 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 70718 ']' 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 70718 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 70718 ']' 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 70718 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70718 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:33.793 killing process with pid 70718 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70718' 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 70718 00:13:33.793 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 70718 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 
00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:34.051 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:34.309 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:34.309 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:34.309 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:34.309 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.309 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.309 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.309 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:34.309 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.woV /tmp/spdk.key-sha256.cIe /tmp/spdk.key-sha384.bPJ /tmp/spdk.key-sha512.HQx /tmp/spdk.key-sha512.ANI /tmp/spdk.key-sha384.8hS /tmp/spdk.key-sha256.YiY '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:34.309 00:13:34.309 real 4m22.116s 00:13:34.309 user 9m58.805s 00:13:34.309 sys 0m37.913s 00:13:34.309 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:34.309 ************************************ 00:13:34.309 END TEST nvmf_auth_target 00:13:34.309 ************************************ 00:13:34.310 13:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.310 13:41:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:34.310 13:41:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:34.310 13:41:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:34.310 13:41:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:34.310 13:41:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:34.310 ************************************ 00:13:34.310 START TEST nvmf_bdevio_no_huge 00:13:34.310 ************************************ 00:13:34.310 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:34.310 * Looking for test storage... 00:13:34.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:34.310 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:34.310 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:13:34.310 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:34.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.570 --rc genhtml_branch_coverage=1 00:13:34.570 --rc genhtml_function_coverage=1 00:13:34.570 --rc genhtml_legend=1 00:13:34.570 --rc geninfo_all_blocks=1 00:13:34.570 --rc geninfo_unexecuted_blocks=1 00:13:34.570 00:13:34.570 ' 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:34.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.570 --rc genhtml_branch_coverage=1 00:13:34.570 --rc genhtml_function_coverage=1 00:13:34.570 --rc genhtml_legend=1 00:13:34.570 --rc geninfo_all_blocks=1 00:13:34.570 --rc geninfo_unexecuted_blocks=1 00:13:34.570 00:13:34.570 ' 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:34.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.570 --rc genhtml_branch_coverage=1 00:13:34.570 --rc genhtml_function_coverage=1 00:13:34.570 --rc genhtml_legend=1 00:13:34.570 --rc geninfo_all_blocks=1 00:13:34.570 --rc geninfo_unexecuted_blocks=1 00:13:34.570 00:13:34.570 ' 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:34.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.570 --rc genhtml_branch_coverage=1 00:13:34.570 --rc genhtml_function_coverage=1 00:13:34.570 --rc genhtml_legend=1 00:13:34.570 --rc geninfo_all_blocks=1 00:13:34.570 --rc geninfo_unexecuted_blocks=1 00:13:34.570 00:13:34.570 ' 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:34.570 
13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.570 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:34.571 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:34.571 
13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:34.571 Cannot find device "nvmf_init_br" 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:34.571 Cannot find device "nvmf_init_br2" 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:34.571 Cannot find device "nvmf_tgt_br" 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:34.571 Cannot find device "nvmf_tgt_br2" 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:34.571 Cannot find device "nvmf_init_br" 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:34.571 Cannot find device "nvmf_init_br2" 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:34.571 Cannot find device "nvmf_tgt_br" 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:34.571 Cannot find device "nvmf_tgt_br2" 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:34.571 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:34.831 Cannot find device "nvmf_br" 00:13:34.831 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:34.831 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:34.831 Cannot find device "nvmf_init_if" 00:13:34.831 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:34.831 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:34.831 Cannot find device "nvmf_init_if2" 00:13:34.831 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:34.831 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:34.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.831 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:34.831 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:34.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:34.832 13:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:34.832 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:35.092 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:35.092 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:13:35.092 00:13:35.092 --- 10.0.0.3 ping statistics --- 00:13:35.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.092 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:35.092 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:35.092 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:13:35.092 00:13:35.092 --- 10.0.0.4 ping statistics --- 00:13:35.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.092 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:35.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:13:35.092 00:13:35.092 --- 10.0.0.1 ping statistics --- 00:13:35.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.092 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:35.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:35.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:13:35.092 00:13:35.092 --- 10.0.0.2 ping statistics --- 00:13:35.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.092 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=71368 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 71368 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71368 ']' 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:35.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:35.092 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:35.092 [2024-10-01 13:41:26.820322] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
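The veth/bridge topology that nvmf_veth_init traces out above can be condensed into a short sketch. This is only a reconstruction of the commands already visible in the trace (interface, namespace and address values are the ones test/nvmf/common.sh uses; the individual "ip link set ... up" steps are omitted for brevity):

  # target-side namespace plus two initiator and two target veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiators 10.0.0.1/.2 on the host, targets 10.0.0.3/.4 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # one bridge ties the four *_br peer ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br
  # open NVMe/TCP port 4420 towards the initiator interfaces and allow bridge forwarding
  # (the ipts wrapper also tags these rules with an SPDK_NVMF comment so teardown can find them)
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings above (10.0.0.3/.4 from the host, 10.0.0.1/.2 from inside the namespace) only confirm that this topology is up before nvmf_tgt is launched inside the namespace with --no-huge -s 1024 -m 0x78, as traced just above.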
00:13:35.092 [2024-10-01 13:41:26.820456] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:35.351 [2024-10-01 13:41:26.973282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.351 [2024-10-01 13:41:27.121259] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.351 [2024-10-01 13:41:27.121356] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.351 [2024-10-01 13:41:27.121383] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.351 [2024-10-01 13:41:27.121401] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.351 [2024-10-01 13:41:27.121416] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.351 [2024-10-01 13:41:27.121615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:13:35.351 [2024-10-01 13:41:27.122288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:13:35.351 [2024-10-01 13:41:27.122382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:13:35.351 [2024-10-01 13:41:27.122393] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.351 [2024-10-01 13:41:27.128869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:36.287 [2024-10-01 13:41:27.912824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:36.287 Malloc0 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.287 13:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:36.287 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:36.288 [2024-10-01 13:41:27.957203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:13:36.288 { 00:13:36.288 "params": { 00:13:36.288 "name": "Nvme$subsystem", 00:13:36.288 "trtype": "$TEST_TRANSPORT", 00:13:36.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:36.288 "adrfam": "ipv4", 00:13:36.288 "trsvcid": "$NVMF_PORT", 00:13:36.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:36.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:36.288 "hdgst": ${hdgst:-false}, 00:13:36.288 "ddgst": ${ddgst:-false} 00:13:36.288 }, 00:13:36.288 "method": "bdev_nvme_attach_controller" 00:13:36.288 } 00:13:36.288 EOF 00:13:36.288 )") 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
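The rpc_cmd invocations above are forwarded to scripts/rpc.py against the running nvmf_tgt, so the same provisioning can be reproduced by hand. A minimal sketch, assuming the target answers on its default RPC socket (/var/tmp/spdk.sock):

  # transport, backing bdev, subsystem, namespace, listener - in that order
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Once the listener is up ("NVMe/TCP Target Listening on 10.0.0.3 port 4420" above), the initiator side only needs the JSON configuration that gen_nvmf_target_json assembles in the surrounding lines.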
00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:13:36.288 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:13:36.288 "params": { 00:13:36.288 "name": "Nvme1", 00:13:36.288 "trtype": "tcp", 00:13:36.288 "traddr": "10.0.0.3", 00:13:36.288 "adrfam": "ipv4", 00:13:36.288 "trsvcid": "4420", 00:13:36.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:36.288 "hdgst": false, 00:13:36.288 "ddgst": false 00:13:36.288 }, 00:13:36.288 "method": "bdev_nvme_attach_controller" 00:13:36.288 }' 00:13:36.288 [2024-10-01 13:41:28.007995] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:13:36.288 [2024-10-01 13:41:28.008088] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71410 ] 00:13:36.546 [2024-10-01 13:41:28.152152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:36.547 [2024-10-01 13:41:28.289656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.547 [2024-10-01 13:41:28.289790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.547 [2024-10-01 13:41:28.289797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.547 [2024-10-01 13:41:28.304861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:36.805 I/O targets: 00:13:36.805 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:36.805 00:13:36.805 00:13:36.805 CUnit - A unit testing framework for C - Version 2.1-3 00:13:36.805 http://cunit.sourceforge.net/ 00:13:36.805 00:13:36.805 00:13:36.805 Suite: bdevio tests on: Nvme1n1 00:13:36.806 Test: blockdev write read block ...passed 00:13:36.806 Test: blockdev write zeroes read block ...passed 00:13:36.806 Test: blockdev write zeroes read no split ...passed 00:13:36.806 Test: blockdev write zeroes read split ...passed 00:13:36.806 Test: blockdev write zeroes read split partial ...passed 00:13:36.806 Test: blockdev reset ...[2024-10-01 13:41:28.544684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:36.806 [2024-10-01 13:41:28.544856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ee720 (9): Bad file descriptor 00:13:36.806 passed 00:13:36.806 Test: blockdev write read 8 blocks ...[2024-10-01 13:41:28.557677] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
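The JSON block printed just above is the whole initiator-side configuration: bdevio does not talk RPC, it reads this config from the file descriptor passed as --json /dev/fd/62. Stripped of the test harness, the pattern is a plain process substitution; a sketch, assuming the gen_nvmf_target_json helper from test/nvmf/common.sh is sourced:

  # attach to the TCP listener at 10.0.0.3:4420 described in the generated JSON,
  # again with 1024 MB of regular memory (-s 1024) instead of hugepages
  ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024

With hdgst/ddgst false in the attach entry the connection runs without header or data digests, and the resulting bdev ("Nvme1n1: 131072 blocks of 512 bytes" above) is the Malloc0 namespace exported by the target.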
00:13:36.806 passed 00:13:36.806 Test: blockdev write read size > 128k ...passed 00:13:36.806 Test: blockdev write read invalid size ...passed 00:13:36.806 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:36.806 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:36.806 Test: blockdev write read max offset ...passed 00:13:36.806 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:36.806 Test: blockdev writev readv 8 blocks ...passed 00:13:36.806 Test: blockdev writev readv 30 x 1block ...passed 00:13:36.806 Test: blockdev writev readv block ...passed 00:13:36.806 Test: blockdev writev readv size > 128k ...passed 00:13:36.806 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:36.806 Test: blockdev comparev and writev ...[2024-10-01 13:41:28.566781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.806 [2024-10-01 13:41:28.566958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:36.806 [2024-10-01 13:41:28.566987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.806 [2024-10-01 13:41:28.566998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:36.806 [2024-10-01 13:41:28.567321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.806 [2024-10-01 13:41:28.567340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:36.806 [2024-10-01 13:41:28.567357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.806 [2024-10-01 13:41:28.567367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:36.806 [2024-10-01 13:41:28.567667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.806 [2024-10-01 13:41:28.567686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:36.806 [2024-10-01 13:41:28.567708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.806 [2024-10-01 13:41:28.567719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:36.806 [2024-10-01 13:41:28.568019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.806 [2024-10-01 13:41:28.568036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:36.806 [2024-10-01 13:41:28.568053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.806 [2024-10-01 13:41:28.568063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:13:36.806 passed 00:13:36.806 Test: blockdev nvme passthru rw ...passed 00:13:36.806 Test: blockdev nvme passthru vendor specific ...[2024-10-01 13:41:28.569162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOpassed 00:13:36.806 Test: blockdev nvme admin passthru ...CK OFFSET 0x0 len:0x0 00:13:36.806 [2024-10-01 13:41:28.569331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:36.806 [2024-10-01 13:41:28.569473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.806 [2024-10-01 13:41:28.569496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:36.806 [2024-10-01 13:41:28.569633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.806 [2024-10-01 13:41:28.569655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:36.806 [2024-10-01 13:41:28.569765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.806 [2024-10-01 13:41:28.569786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:36.806 passed 00:13:36.806 Test: blockdev copy ...passed 00:13:36.806 00:13:36.806 Run Summary: Type Total Ran Passed Failed Inactive 00:13:36.806 suites 1 1 n/a 0 0 00:13:36.806 tests 23 23 23 0 0 00:13:36.806 asserts 152 152 152 0 n/a 00:13:36.806 00:13:36.806 Elapsed time = 0.180 seconds 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:37.373 rmmod nvme_tcp 00:13:37.373 rmmod nvme_fabrics 00:13:37.373 rmmod nvme_keyring 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:13:37.373 13:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 71368 ']' 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 71368 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71368 ']' 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71368 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71368 00:13:37.373 killing process with pid 71368 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71368' 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71368 00:13:37.373 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71368 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.940 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.199 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:38.199 00:13:38.199 real 0m3.753s 00:13:38.199 user 0m11.181s 00:13:38.199 sys 0m1.476s 00:13:38.199 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.199 ************************************ 00:13:38.199 END TEST nvmf_bdevio_no_huge 00:13:38.199 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:38.199 ************************************ 00:13:38.199 13:41:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:38.199 13:41:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:38.199 13:41:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.199 13:41:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:38.199 ************************************ 00:13:38.199 START TEST nvmf_tls 00:13:38.199 ************************************ 00:13:38.199 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:38.199 * Looking for test storage... 
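The teardown traced above (nvmftestfini, ending in nvmf_veth_fini) is the mirror image of the setup: the SPDK_NVMF-tagged iptables rules are stripped, the bridge and veth pairs are deleted, and the namespace is removed so the nvmf_tls test starting here can rebuild the same topology from scratch. Condensed into a sketch; the final netns removal happens inside remove_spdk_ns, whose commands are hidden by xtrace_disable in the trace, so that last step is assumed:

  # drop only the rules the test added, identified by their SPDK_NVMF comment
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # detach everything from the bridge, bring it down, then delete bridge and veth pairs
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk    # assumed final step, performed by remove_spdk_ns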
00:13:38.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:38.199 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:38.199 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:38.199 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:38.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.200 --rc genhtml_branch_coverage=1 00:13:38.200 --rc genhtml_function_coverage=1 00:13:38.200 --rc genhtml_legend=1 00:13:38.200 --rc geninfo_all_blocks=1 00:13:38.200 --rc geninfo_unexecuted_blocks=1 00:13:38.200 00:13:38.200 ' 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:38.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.200 --rc genhtml_branch_coverage=1 00:13:38.200 --rc genhtml_function_coverage=1 00:13:38.200 --rc genhtml_legend=1 00:13:38.200 --rc geninfo_all_blocks=1 00:13:38.200 --rc geninfo_unexecuted_blocks=1 00:13:38.200 00:13:38.200 ' 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:38.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.200 --rc genhtml_branch_coverage=1 00:13:38.200 --rc genhtml_function_coverage=1 00:13:38.200 --rc genhtml_legend=1 00:13:38.200 --rc geninfo_all_blocks=1 00:13:38.200 --rc geninfo_unexecuted_blocks=1 00:13:38.200 00:13:38.200 ' 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:38.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.200 --rc genhtml_branch_coverage=1 00:13:38.200 --rc genhtml_function_coverage=1 00:13:38.200 --rc genhtml_legend=1 00:13:38.200 --rc geninfo_all_blocks=1 00:13:38.200 --rc geninfo_unexecuted_blocks=1 00:13:38.200 00:13:38.200 ' 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.200 13:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.200 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:38.459 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:38.459 
13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:13:38.459 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:38.460 Cannot find device "nvmf_init_br" 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:38.460 Cannot find device "nvmf_init_br2" 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:38.460 Cannot find device "nvmf_tgt_br" 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:38.460 Cannot find device "nvmf_tgt_br2" 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:38.460 Cannot find device "nvmf_init_br" 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:38.460 Cannot find device "nvmf_init_br2" 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:38.460 Cannot find device "nvmf_tgt_br" 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:38.460 Cannot find device "nvmf_tgt_br2" 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:38.460 Cannot find device "nvmf_br" 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:38.460 Cannot find device "nvmf_init_if" 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:38.460 Cannot find device "nvmf_init_if2" 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:38.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:38.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:38.460 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:38.719 13:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:38.719 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:38.719 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:13:38.719 00:13:38.719 --- 10.0.0.3 ping statistics --- 00:13:38.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.719 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:38.719 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:38.719 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:13:38.719 00:13:38.719 --- 10.0.0.4 ping statistics --- 00:13:38.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.719 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:13:38.719 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:38.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:38.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:13:38.719 00:13:38.719 --- 10.0.0.1 ping statistics --- 00:13:38.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.719 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:38.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:38.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:13:38.720 00:13:38.720 --- 10.0.0.2 ping statistics --- 00:13:38.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.720 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=71648 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 71648 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71648 ']' 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.720 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.977 [2024-10-01 13:41:30.609164] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
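The nvmf_veth_init sequence traced above builds the test topology: the initiator-side interfaces (10.0.0.1 and 10.0.0.2) stay in the root namespace, their target-side counterparts (10.0.0.3 and 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, all peer ends are joined through the nvmf_br bridge, and iptables ACCEPT rules open TCP port 4420 before connectivity is verified with ping. Below is a condensed sketch of the same wiring reduced to a single initiator/target pair; interface names and addresses are taken from the trace, and the authoritative helper is nvmf_veth_init in test/nvmf/common.sh.

# one initiator/target pair of the topology built by nvmf_veth_init
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the peer ends together and open the NVMe/TCP port
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity check in both directions, mirroring common.sh@222-225
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1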
00:13:38.977 [2024-10-01 13:41:30.609274] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.977 [2024-10-01 13:41:30.756026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.977 [2024-10-01 13:41:30.825259] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.977 [2024-10-01 13:41:30.825314] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.977 [2024-10-01 13:41:30.825326] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.977 [2024-10-01 13:41:30.825334] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.977 [2024-10-01 13:41:30.825341] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.977 [2024-10-01 13:41:30.825372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.236 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.236 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:39.236 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:39.236 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:39.236 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.236 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.236 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:39.236 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:39.495 true 00:13:39.495 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:39.495 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:39.754 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:39.754 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:39.754 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:40.329 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:40.329 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:40.329 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:40.329 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:40.329 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:40.895 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:40.895 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:41.153 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:41.153 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:41.153 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:41.153 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:41.412 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:41.412 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:41.412 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:41.670 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:41.670 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:41.926 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:41.926 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:41.926 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:42.183 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:42.183 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:42.441 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:42.699 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.TSY4SWMLTK 00:13:42.699 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:42.699 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.CS4Y3s6kQV 00:13:42.699 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:42.699 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:42.699 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TSY4SWMLTK 00:13:42.699 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.CS4Y3s6kQV 00:13:42.699 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:42.959 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:43.217 [2024-10-01 13:41:34.932692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:43.217 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.TSY4SWMLTK 00:13:43.217 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TSY4SWMLTK 00:13:43.217 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:43.506 [2024-10-01 13:41:35.219680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.506 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:43.764 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:44.331 [2024-10-01 13:41:35.943859] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:44.331 [2024-10-01 13:41:35.944126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:44.331 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:44.590 malloc0 00:13:44.590 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:44.849 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TSY4SWMLTK 00:13:45.107 13:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:45.366 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TSY4SWMLTK 00:13:57.569 Initializing NVMe Controllers 00:13:57.569 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:57.569 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:57.569 Initialization complete. Launching workers. 00:13:57.569 ======================================================== 00:13:57.569 Latency(us) 00:13:57.569 Device Information : IOPS MiB/s Average min max 00:13:57.569 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9607.40 37.53 6663.03 1418.51 8568.36 00:13:57.569 ======================================================== 00:13:57.569 Total : 9607.40 37.53 6663.03 1418.51 8568.36 00:13:57.569 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TSY4SWMLTK 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TSY4SWMLTK 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71886 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71886 /var/tmp/bdevperf.sock 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71886 ']' 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:57.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
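At this point the happy-path setup is complete: two interchange-format PSKs were written to temp files (only the first is ever registered with the target), the ssl sock implementation was made the default and pinned to TLS 1.3, a TLS listener for cnode1 was created on 10.0.0.3:4420, and host1 was granted access via key0; spdk_nvme_perf then connects over TLS with the same key file. A condensed recap of those commands as they appear in the trace follows (the /tmp/tmp.* names are the mktemp results of this particular run):

spdk=/home/vagrant/spdk_repo/spdk
rpc=$spdk/scripts/rpc.py
key=/tmp/tmp.TSY4SWMLTK      # registered with the target
key2=/tmp/tmp.CS4Y3s6kQV     # well-formed key, never registered

echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
echo -n 'NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:' > "$key2"
chmod 0600 "$key" "$key2"

# target runs inside the namespace and waits for RPC configuration
ip netns exec nvmf_tgt_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &

$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# initiator side: perf over TLS, pointed at the same PSK file
ip netns exec nvmf_tgt_ns_spdk $spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$key"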
00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:57.569 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.569 [2024-10-01 13:41:47.394911] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:13:57.569 [2024-10-01 13:41:47.395035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71886 ] 00:13:57.569 [2024-10-01 13:41:47.538625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.569 [2024-10-01 13:41:47.609095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.569 [2024-10-01 13:41:47.643239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:57.570 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:57.570 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:57.570 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TSY4SWMLTK 00:13:57.570 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:57.570 [2024-10-01 13:41:48.308742] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:57.570 TLSTESTn1 00:13:57.570 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:57.570 Running I/O for 10 seconds... 
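run_bdevperf exercises the same key from a separate initiator process: bdevperf is started idle (-z) with its own RPC socket, the PSK file is registered on that socket as key0, a TLS controller named TLSTEST is attached (exposing bdev TLSTESTn1), and bdevperf.py launches the configured verify workload. Condensed from the trace above:

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# idle bdevperf instance; the script waits for the RPC socket before configuring it
$spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

$spdk/scripts/rpc.py -s "$sock" keyring_file_add_key key0 /tmp/tmp.TSY4SWMLTK
$spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# runs the verify job against TLSTESTn1 and prints the JSON result block seen below
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests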
00:14:06.762 4002.00 IOPS, 15.63 MiB/s 4015.00 IOPS, 15.68 MiB/s 4017.33 IOPS, 15.69 MiB/s 4026.25 IOPS, 15.73 MiB/s 4030.20 IOPS, 15.74 MiB/s 4032.50 IOPS, 15.75 MiB/s 4033.86 IOPS, 15.76 MiB/s 4032.50 IOPS, 15.75 MiB/s 4035.78 IOPS, 15.76 MiB/s 4035.70 IOPS, 15.76 MiB/s 00:14:06.762 Latency(us) 00:14:06.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.762 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:06.762 Verification LBA range: start 0x0 length 0x2000 00:14:06.762 TLSTESTn1 : 10.02 4041.82 15.79 0.00 0.00 31612.26 5064.15 25022.84 00:14:06.762 =================================================================================================================== 00:14:06.762 Total : 4041.82 15.79 0.00 0.00 31612.26 5064.15 25022.84 00:14:06.762 { 00:14:06.762 "results": [ 00:14:06.762 { 00:14:06.762 "job": "TLSTESTn1", 00:14:06.762 "core_mask": "0x4", 00:14:06.762 "workload": "verify", 00:14:06.762 "status": "finished", 00:14:06.762 "verify_range": { 00:14:06.762 "start": 0, 00:14:06.762 "length": 8192 00:14:06.762 }, 00:14:06.762 "queue_depth": 128, 00:14:06.762 "io_size": 4096, 00:14:06.762 "runtime": 10.015788, 00:14:06.762 "iops": 4041.818776515637, 00:14:06.762 "mibps": 15.788354595764208, 00:14:06.762 "io_failed": 0, 00:14:06.762 "io_timeout": 0, 00:14:06.762 "avg_latency_us": 31612.26281921033, 00:14:06.762 "min_latency_us": 5064.145454545454, 00:14:06.762 "max_latency_us": 25022.836363636365 00:14:06.762 } 00:14:06.762 ], 00:14:06.762 "core_count": 1 00:14:06.762 } 00:14:06.762 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:06.762 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71886 00:14:06.762 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71886 ']' 00:14:06.762 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71886 00:14:06.762 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:06.762 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.762 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71886 00:14:06.762 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:06.762 killing process with pid 71886 00:14:06.762 Received shutdown signal, test time was about 10.000000 seconds 00:14:06.762 00:14:06.762 Latency(us) 00:14:06.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.762 =================================================================================================================== 00:14:06.762 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:06.762 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:06.762 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71886' 00:14:06.762 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71886 00:14:06.762 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71886 00:14:07.021 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.CS4Y3s6kQV 00:14:07.021 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:07.021 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CS4Y3s6kQV 00:14:07.021 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:07.021 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.021 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:07.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:07.021 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.021 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CS4Y3s6kQV 00:14:07.021 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CS4Y3s6kQV 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72013 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72013 /var/tmp/bdevperf.sock 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72013 ']' 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.022 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.022 [2024-10-01 13:41:58.804510] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
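The remaining cases in this section (tls.sh@147, @150, @153 and @156) are negative tests: run_bdevperf is invoked with a key, host NQN or subsystem NQN the target will not accept, and the NOT wrapper from test/common/autotest_common.sh (the es/valid_exec_arg bookkeeping traced above) makes the test pass only when the wrapped command fails. Stripped of the tracing, the check for this first case amounts to the following simplified sketch; it is not the real helper:

# simplified shape of "NOT run_bdevperf ..." from tls.sh@147
if run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CS4Y3s6kQV; then
    echo "attach with an unregistered PSK unexpectedly succeeded" >&2
    exit 1
fi
echo "expected failure observed for the unregistered key"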
00:14:07.022 [2024-10-01 13:41:58.804894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72013 ] 00:14:07.281 [2024-10-01 13:41:58.944083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.281 [2024-10-01 13:41:59.002431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.281 [2024-10-01 13:41:59.032407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:08.218 13:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:08.218 13:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:08.218 13:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CS4Y3s6kQV 00:14:08.476 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:08.734 [2024-10-01 13:42:00.351509] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.734 [2024-10-01 13:42:00.357905] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:08.734 [2024-10-01 13:42:00.358401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15de090 (107): Transport endpoint is not connected 00:14:08.734 [2024-10-01 13:42:00.359393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15de090 (9): Bad file descriptor 00:14:08.734 [2024-10-01 13:42:00.360388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:08.734 [2024-10-01 13:42:00.360418] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:08.734 [2024-10-01 13:42:00.360431] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:08.734 [2024-10-01 13:42:00.360442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:08.734 request: 00:14:08.734 { 00:14:08.734 "name": "TLSTEST", 00:14:08.734 "trtype": "tcp", 00:14:08.734 "traddr": "10.0.0.3", 00:14:08.734 "adrfam": "ipv4", 00:14:08.734 "trsvcid": "4420", 00:14:08.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:08.734 "prchk_reftag": false, 00:14:08.735 "prchk_guard": false, 00:14:08.735 "hdgst": false, 00:14:08.735 "ddgst": false, 00:14:08.735 "psk": "key0", 00:14:08.735 "allow_unrecognized_csi": false, 00:14:08.735 "method": "bdev_nvme_attach_controller", 00:14:08.735 "req_id": 1 00:14:08.735 } 00:14:08.735 Got JSON-RPC error response 00:14:08.735 response: 00:14:08.735 { 00:14:08.735 "code": -5, 00:14:08.735 "message": "Input/output error" 00:14:08.735 } 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72013 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72013 ']' 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72013 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72013 00:14:08.735 killing process with pid 72013 00:14:08.735 Received shutdown signal, test time was about 10.000000 seconds 00:14:08.735 00:14:08.735 Latency(us) 00:14:08.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.735 =================================================================================================================== 00:14:08.735 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72013' 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72013 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72013 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TSY4SWMLTK 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TSY4SWMLTK 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TSY4SWMLTK 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TSY4SWMLTK 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72047 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72047 /var/tmp/bdevperf.sock 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72047 ']' 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:08.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:08.735 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.993 [2024-10-01 13:42:00.618985] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:14:08.994 [2024-10-01 13:42:00.619290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72047 ] 00:14:08.994 [2024-10-01 13:42:00.752890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.994 [2024-10-01 13:42:00.823819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.252 [2024-10-01 13:42:00.855015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:09.818 13:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.818 13:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:09.818 13:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TSY4SWMLTK 00:14:10.077 13:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:10.335 [2024-10-01 13:42:02.186839] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:10.593 [2024-10-01 13:42:02.198076] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:10.594 [2024-10-01 13:42:02.198127] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:10.594 [2024-10-01 13:42:02.198184] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:10.594 [2024-10-01 13:42:02.198527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c2090 (107): Transport endpoint is not connected 00:14:10.594 [2024-10-01 13:42:02.199515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c2090 (9): Bad file descriptor 00:14:10.594 [2024-10-01 13:42:02.200512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:10.594 [2024-10-01 13:42:02.200547] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:10.594 [2024-10-01 13:42:02.200560] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:10.594 [2024-10-01 13:42:02.200571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
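This failure is the host-NQN mismatch case (tls.sh@150): the key file is the registered one, but the initiator identifies itself as host2, and the target looks up the PSK by the identity string "NVMe0R01 <hostnqn> <subnqn>" logged above. Only host1 was granted key0, so tcp_sock_get_key finds nothing and the connection is torn down before the attach completes; the JSON-RPC dump that follows records the failed bdev_nvme_attach_controller call. For contrast, a hypothetical fix-up (not part of the test) that would let host2 connect is simply granting that host NQN a key as well:

# hypothetical, NOT executed by tls.sh: grant host2 access with the already
# registered key0 so the identity lookup would succeed
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0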
00:14:10.594 request: 00:14:10.594 { 00:14:10.594 "name": "TLSTEST", 00:14:10.594 "trtype": "tcp", 00:14:10.594 "traddr": "10.0.0.3", 00:14:10.594 "adrfam": "ipv4", 00:14:10.594 "trsvcid": "4420", 00:14:10.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.594 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:10.594 "prchk_reftag": false, 00:14:10.594 "prchk_guard": false, 00:14:10.594 "hdgst": false, 00:14:10.594 "ddgst": false, 00:14:10.594 "psk": "key0", 00:14:10.594 "allow_unrecognized_csi": false, 00:14:10.594 "method": "bdev_nvme_attach_controller", 00:14:10.594 "req_id": 1 00:14:10.594 } 00:14:10.594 Got JSON-RPC error response 00:14:10.594 response: 00:14:10.594 { 00:14:10.594 "code": -5, 00:14:10.594 "message": "Input/output error" 00:14:10.594 } 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72047 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72047 ']' 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72047 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72047 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:10.594 killing process with pid 72047 00:14:10.594 Received shutdown signal, test time was about 10.000000 seconds 00:14:10.594 00:14:10.594 Latency(us) 00:14:10.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.594 =================================================================================================================== 00:14:10.594 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72047' 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72047 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72047 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TSY4SWMLTK 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TSY4SWMLTK 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TSY4SWMLTK 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TSY4SWMLTK 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72070 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72070 /var/tmp/bdevperf.sock 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72070 ']' 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.594 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.852 [2024-10-01 13:42:02.471096] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:14:10.852 [2024-10-01 13:42:02.471196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72070 ] 00:14:10.852 [2024-10-01 13:42:02.609182] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.852 [2024-10-01 13:42:02.669105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.852 [2024-10-01 13:42:02.698498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:11.110 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.110 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:11.110 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TSY4SWMLTK 00:14:11.369 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:11.629 [2024-10-01 13:42:03.361689] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:11.629 [2024-10-01 13:42:03.368198] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:11.629 [2024-10-01 13:42:03.368484] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:11.629 [2024-10-01 13:42:03.368781] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:11.629 [2024-10-01 13:42:03.369647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227b090 (107): Transport endpoint is not connected 00:14:11.629 [2024-10-01 13:42:03.370637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227b090 (9): Bad file descriptor 00:14:11.629 [2024-10-01 13:42:03.371629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:11.629 [2024-10-01 13:42:03.371656] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:11.629 [2024-10-01 13:42:03.371669] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:11.629 [2024-10-01 13:42:03.371680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
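The third negative case (tls.sh@153) mismatches the other half of the identity: host1 and key0 are valid, but the subsystem nqn.2016-06.io.spdk:cnode2 was never created, so the identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" has no PSK either and the attach fails the same way. An illustrative check (not run by the test) that shows why, by listing what the target actually exposes:

# illustrative only: nvmf_get_subsystems is expected to list the discovery
# subsystem and cnode1, but no cnode2, so the PSK lookup above has nothing to find
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems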
00:14:11.629 request: 00:14:11.629 { 00:14:11.629 "name": "TLSTEST", 00:14:11.629 "trtype": "tcp", 00:14:11.629 "traddr": "10.0.0.3", 00:14:11.629 "adrfam": "ipv4", 00:14:11.629 "trsvcid": "4420", 00:14:11.629 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:11.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.629 "prchk_reftag": false, 00:14:11.629 "prchk_guard": false, 00:14:11.629 "hdgst": false, 00:14:11.629 "ddgst": false, 00:14:11.629 "psk": "key0", 00:14:11.629 "allow_unrecognized_csi": false, 00:14:11.629 "method": "bdev_nvme_attach_controller", 00:14:11.629 "req_id": 1 00:14:11.629 } 00:14:11.629 Got JSON-RPC error response 00:14:11.629 response: 00:14:11.629 { 00:14:11.629 "code": -5, 00:14:11.629 "message": "Input/output error" 00:14:11.629 } 00:14:11.629 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72070 00:14:11.629 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72070 ']' 00:14:11.629 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72070 00:14:11.629 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:11.629 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.629 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72070 00:14:11.629 killing process with pid 72070 00:14:11.629 Received shutdown signal, test time was about 10.000000 seconds 00:14:11.629 00:14:11.629 Latency(us) 00:14:11.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.629 =================================================================================================================== 00:14:11.629 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:11.629 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:11.629 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:11.629 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72070' 00:14:11.629 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72070 00:14:11.629 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72070 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72097 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72097 /var/tmp/bdevperf.sock 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72097 ']' 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.889 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.889 [2024-10-01 13:42:03.650030] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:14:11.889 [2024-10-01 13:42:03.650374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72097 ] 00:14:12.149 [2024-10-01 13:42:03.792289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.149 [2024-10-01 13:42:03.850359] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.149 [2024-10-01 13:42:03.879775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:13.145 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.145 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:13.145 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:13.145 [2024-10-01 13:42:04.890265] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:13.145 [2024-10-01 13:42:04.890574] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:13.145 request: 00:14:13.145 { 00:14:13.145 "name": "key0", 00:14:13.145 "path": "", 00:14:13.145 "method": "keyring_file_add_key", 00:14:13.145 "req_id": 1 00:14:13.145 } 00:14:13.145 Got JSON-RPC error response 00:14:13.145 response: 00:14:13.145 { 00:14:13.145 "code": -1, 00:14:13.145 "message": "Operation not permitted" 00:14:13.145 } 00:14:13.145 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:13.403 [2024-10-01 13:42:05.202462] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:13.403 [2024-10-01 13:42:05.202834] bdev_nvme.c:6389:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:13.403 request: 00:14:13.403 { 00:14:13.403 "name": "TLSTEST", 00:14:13.403 "trtype": "tcp", 00:14:13.403 "traddr": "10.0.0.3", 00:14:13.403 "adrfam": "ipv4", 00:14:13.403 "trsvcid": "4420", 00:14:13.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:13.403 "prchk_reftag": false, 00:14:13.403 "prchk_guard": false, 00:14:13.403 "hdgst": false, 00:14:13.403 "ddgst": false, 00:14:13.403 "psk": "key0", 00:14:13.403 "allow_unrecognized_csi": false, 00:14:13.403 "method": "bdev_nvme_attach_controller", 00:14:13.403 "req_id": 1 00:14:13.403 } 00:14:13.403 Got JSON-RPC error response 00:14:13.403 response: 00:14:13.403 { 00:14:13.403 "code": -126, 00:14:13.403 "message": "Required key not available" 00:14:13.403 } 00:14:13.403 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72097 00:14:13.403 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72097 ']' 00:14:13.403 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72097 00:14:13.403 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:13.403 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:13.403 13:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72097 00:14:13.403 killing process with pid 72097 00:14:13.403 Received shutdown signal, test time was about 10.000000 seconds 00:14:13.403 00:14:13.403 Latency(us) 00:14:13.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.403 =================================================================================================================== 00:14:13.403 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:13.403 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:13.403 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:13.403 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72097' 00:14:13.403 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72097 00:14:13.403 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72097 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71648 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71648 ']' 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71648 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71648 00:14:13.662 killing process with pid 71648 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71648' 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71648 00:14:13.662 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71648 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:13.922 13:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ifjnsatssg 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ifjnsatssg 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=72141 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 72141 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72141 ']' 00:14:13.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:13.922 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.922 [2024-10-01 13:42:05.734437] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:14:13.922 [2024-10-01 13:42:05.734732] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.181 [2024-10-01 13:42:05.868396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.181 [2024-10-01 13:42:05.925833] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.181 [2024-10-01 13:42:05.925891] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
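The format_interchange_psk step above turns the 48-character configured key and digest 2 into the NVMeTLSkey-1:02:...: string that is written to /tmp/tmp.ifjnsatssg and chmod'ed to 0600. A sketch of that transformation, assuming (as the python snippet piped in at nvmf/common.sh@729 appears to do) that a little-endian CRC-32 of the key bytes is appended before base64 encoding:

import base64
import zlib

def format_interchange_psk(key, digest):
    # Assumption: the configured PSK here is the ASCII key string itself,
    # with a little-endian CRC-32 appended, then base64-encoded; "digest"
    # becomes the two-digit hash indicator (01 = SHA-256, 02 = SHA-384).
    data = key.encode()
    crc = zlib.crc32(data).to_bytes(4, "little")
    b64 = base64.b64encode(data + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

# Should reproduce the key_long value above if the CRC assumption holds.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))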
00:14:14.181 [2024-10-01 13:42:05.925904] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.181 [2024-10-01 13:42:05.925912] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.181 [2024-10-01 13:42:05.925919] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.181 [2024-10-01 13:42:05.925960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.181 [2024-10-01 13:42:05.955136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:15.117 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.117 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:15.117 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:15.117 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:15.117 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.117 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.117 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ifjnsatssg 00:14:15.117 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ifjnsatssg 00:14:15.117 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:15.376 [2024-10-01 13:42:07.089711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.376 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:15.635 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:15.896 [2024-10-01 13:42:07.657843] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:15.896 [2024-10-01 13:42:07.658066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:15.896 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:16.161 malloc0 00:14:16.161 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:16.420 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ifjnsatssg 00:14:16.678 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ifjnsatssg 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
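setup_nvmf_tgt above is the target-side half of the test: create the TCP transport, a subsystem, a TLS-enabled listener (-k), back it with a malloc namespace, register the PSK file in the keyring, and authorize the host with --psk key0. A condensed sketch of the same rpc.py sequence driven from Python; the script path, address, key file and NQNs are copied from the log and would differ on another setup.

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SUBNQN = "nqn.2016-06.io.spdk:cnode1"
HOSTNQN = "nqn.2016-06.io.spdk:host1"

def rpc(*args):
    # The target listens on the default /var/tmp/spdk.sock, so no -s is passed.
    subprocess.run([RPC, *args], check=True)

rpc("nvmf_create_transport", "-t", "tcp", "-o")
rpc("nvmf_create_subsystem", SUBNQN, "-s", "SPDK00000000000001", "-m", "10")
rpc("nvmf_subsystem_add_listener", SUBNQN, "-t", "tcp", "-a", "10.0.0.3", "-s", "4420", "-k")
rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
rpc("nvmf_subsystem_add_ns", SUBNQN, "malloc0", "-n", "1")
rpc("keyring_file_add_key", "key0", "/tmp/tmp.ifjnsatssg")
rpc("nvmf_subsystem_add_host", SUBNQN, HOSTNQN, "--psk", "key0")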
00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ifjnsatssg 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72202 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72202 /var/tmp/bdevperf.sock 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72202 ']' 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.246 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.246 [2024-10-01 13:42:08.850932] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:14:17.246 [2024-10-01 13:42:08.851226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72202 ] 00:14:17.246 [2024-10-01 13:42:09.004948] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.246 [2024-10-01 13:42:09.091226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.505 [2024-10-01 13:42:09.122813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.071 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.071 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:18.071 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ifjnsatssg 00:14:18.330 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:18.609 [2024-10-01 13:42:10.421656] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:18.891 TLSTESTn1 00:14:18.891 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:18.891 Running I/O for 10 seconds... 00:14:29.038 3928.00 IOPS, 15.34 MiB/s 3957.50 IOPS, 15.46 MiB/s 3982.33 IOPS, 15.56 MiB/s 3968.00 IOPS, 15.50 MiB/s 3944.20 IOPS, 15.41 MiB/s 3956.67 IOPS, 15.46 MiB/s 3965.86 IOPS, 15.49 MiB/s 3973.88 IOPS, 15.52 MiB/s 3975.78 IOPS, 15.53 MiB/s 3963.90 IOPS, 15.48 MiB/s 00:14:29.038 Latency(us) 00:14:29.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.038 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:29.039 Verification LBA range: start 0x0 length 0x2000 00:14:29.039 TLSTESTn1 : 10.02 3970.20 15.51 0.00 0.00 32182.52 5659.93 30742.34 00:14:29.039 =================================================================================================================== 00:14:29.039 Total : 3970.20 15.51 0.00 0.00 32182.52 5659.93 30742.34 00:14:29.039 { 00:14:29.039 "results": [ 00:14:29.039 { 00:14:29.039 "job": "TLSTESTn1", 00:14:29.039 "core_mask": "0x4", 00:14:29.039 "workload": "verify", 00:14:29.039 "status": "finished", 00:14:29.039 "verify_range": { 00:14:29.039 "start": 0, 00:14:29.039 "length": 8192 00:14:29.039 }, 00:14:29.039 "queue_depth": 128, 00:14:29.039 "io_size": 4096, 00:14:29.039 "runtime": 10.015867, 00:14:29.039 "iops": 3970.2004828937925, 00:14:29.039 "mibps": 15.508595636303877, 00:14:29.039 "io_failed": 0, 00:14:29.039 "io_timeout": 0, 00:14:29.039 "avg_latency_us": 32182.517199090107, 00:14:29.039 "min_latency_us": 5659.927272727273, 00:14:29.039 "max_latency_us": 30742.34181818182 00:14:29.039 } 00:14:29.039 ], 00:14:29.039 "core_count": 1 00:14:29.039 } 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # 
killprocess 72202 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72202 ']' 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72202 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72202 00:14:29.039 killing process with pid 72202 00:14:29.039 Received shutdown signal, test time was about 10.000000 seconds 00:14:29.039 00:14:29.039 Latency(us) 00:14:29.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.039 =================================================================================================================== 00:14:29.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72202' 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72202 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72202 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ifjnsatssg 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ifjnsatssg 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ifjnsatssg 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ifjnsatssg 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ifjnsatssg 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72334 00:14:29.039 
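The successful run above (bdevperf pid 72202, ~3970 IOPS over TLSTESTn1) is the client-side half: register the same PSK file with the bdevperf RPC server, attach a TLS-protected controller with --psk key0, then drive the verify workload through bdevperf.py. A sketch of that sequence; socket path, address and NQNs again come straight from the log.

import subprocess

SPDK = "/home/vagrant/spdk_repo/spdk"
SOCK = "/var/tmp/bdevperf.sock"

def rpc(*args):
    subprocess.run([SPDK + "/scripts/rpc.py", "-s", SOCK, *args], check=True)

# bdevperf itself was started separately, as shown above:
#   build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
rpc("keyring_file_add_key", "key0", "/tmp/tmp.ifjnsatssg")
rpc("bdev_nvme_attach_controller", "-b", "TLSTEST", "-t", "tcp",
    "-a", "10.0.0.3", "-s", "4420", "-f", "ipv4",
    "-n", "nqn.2016-06.io.spdk:cnode1", "-q", "nqn.2016-06.io.spdk:host1",
    "--psk", "key0")
# Run the queued verify workload against the TLSTESTn1 bdev and print results.
subprocess.run([SPDK + "/examples/bdev/bdevperf/bdevperf.py",
                "-t", "20", "-s", SOCK, "perform_tests"], check=True)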
13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72334 /var/tmp/bdevperf.sock 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72334 ']' 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:29.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:29.039 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.298 [2024-10-01 13:42:20.934033] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:14:29.298 [2024-10-01 13:42:20.934139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72334 ] 00:14:29.298 [2024-10-01 13:42:21.075392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.298 [2024-10-01 13:42:21.147937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.555 [2024-10-01 13:42:21.182997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:29.555 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:29.556 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:29.556 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ifjnsatssg 00:14:29.813 [2024-10-01 13:42:21.544362] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ifjnsatssg': 0100666 00:14:29.813 [2024-10-01 13:42:21.544422] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:29.813 request: 00:14:29.813 { 00:14:29.813 "name": "key0", 00:14:29.813 "path": "/tmp/tmp.ifjnsatssg", 00:14:29.813 "method": "keyring_file_add_key", 00:14:29.813 "req_id": 1 00:14:29.813 } 00:14:29.813 Got JSON-RPC error response 00:14:29.813 response: 00:14:29.813 { 00:14:29.813 "code": -1, 00:14:29.813 "message": "Operation not permitted" 00:14:29.813 } 00:14:29.813 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:30.072 [2024-10-01 13:42:21.820525] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: 
TLS support is considered experimental 00:14:30.072 [2024-10-01 13:42:21.820613] bdev_nvme.c:6389:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:30.072 request: 00:14:30.072 { 00:14:30.072 "name": "TLSTEST", 00:14:30.072 "trtype": "tcp", 00:14:30.072 "traddr": "10.0.0.3", 00:14:30.072 "adrfam": "ipv4", 00:14:30.072 "trsvcid": "4420", 00:14:30.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:30.072 "prchk_reftag": false, 00:14:30.072 "prchk_guard": false, 00:14:30.072 "hdgst": false, 00:14:30.072 "ddgst": false, 00:14:30.072 "psk": "key0", 00:14:30.072 "allow_unrecognized_csi": false, 00:14:30.072 "method": "bdev_nvme_attach_controller", 00:14:30.072 "req_id": 1 00:14:30.072 } 00:14:30.072 Got JSON-RPC error response 00:14:30.072 response: 00:14:30.072 { 00:14:30.072 "code": -126, 00:14:30.072 "message": "Required key not available" 00:14:30.072 } 00:14:30.072 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72334 00:14:30.072 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72334 ']' 00:14:30.072 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72334 00:14:30.072 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:30.072 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:30.072 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72334 00:14:30.072 killing process with pid 72334 00:14:30.072 Received shutdown signal, test time was about 10.000000 seconds 00:14:30.072 00:14:30.072 Latency(us) 00:14:30.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.072 =================================================================================================================== 00:14:30.072 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:30.072 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:30.072 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:30.072 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72334' 00:14:30.072 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72334 00:14:30.072 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72334 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72141 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72141 ']' 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72141 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72141 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72141' 00:14:30.331 killing process with pid 72141 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72141 00:14:30.331 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72141 00:14:30.589 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:30.589 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:30.589 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:30.589 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.589 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=72368 00:14:30.589 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 72368 00:14:30.589 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:30.589 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72368 ']' 00:14:30.589 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.589 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:30.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.589 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.589 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:30.589 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.589 [2024-10-01 13:42:22.293654] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:14:30.589 [2024-10-01 13:42:22.293751] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.589 [2024-10-01 13:42:22.428837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.848 [2024-10-01 13:42:22.493558] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.848 [2024-10-01 13:42:22.493620] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:30.848 [2024-10-01 13:42:22.493632] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.848 [2024-10-01 13:42:22.493641] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.848 [2024-10-01 13:42:22.493648] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.848 [2024-10-01 13:42:22.493676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.848 [2024-10-01 13:42:22.524290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ifjnsatssg 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ifjnsatssg 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.ifjnsatssg 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ifjnsatssg 00:14:30.848 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:31.106 [2024-10-01 13:42:22.880038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.106 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:31.364 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:31.622 [2024-10-01 13:42:23.448156] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:31.622 [2024-10-01 13:42:23.448402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:31.622 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:14:31.880 malloc0 00:14:31.880 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:32.138 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ifjnsatssg 00:14:32.704 [2024-10-01 13:42:24.269477] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ifjnsatssg': 0100666 00:14:32.704 [2024-10-01 13:42:24.269529] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:32.704 request: 00:14:32.704 { 00:14:32.704 "name": "key0", 00:14:32.704 "path": "/tmp/tmp.ifjnsatssg", 00:14:32.704 "method": "keyring_file_add_key", 00:14:32.704 "req_id": 1 00:14:32.704 } 00:14:32.704 Got JSON-RPC error response 00:14:32.704 response: 00:14:32.704 { 00:14:32.704 "code": -1, 00:14:32.704 "message": "Operation not permitted" 00:14:32.704 } 00:14:32.704 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:32.962 [2024-10-01 13:42:24.629604] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:32.962 [2024-10-01 13:42:24.629683] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:32.962 request: 00:14:32.962 { 00:14:32.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.962 "host": "nqn.2016-06.io.spdk:host1", 00:14:32.962 "psk": "key0", 00:14:32.962 "method": "nvmf_subsystem_add_host", 00:14:32.962 "req_id": 1 00:14:32.962 } 00:14:32.962 Got JSON-RPC error response 00:14:32.962 response: 00:14:32.962 { 00:14:32.962 "code": -32603, 00:14:32.962 "message": "Internal error" 00:14:32.962 } 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72368 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72368 ']' 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72368 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72368 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:32.962 killing process with pid 72368 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72368' 00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72368 
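Both failures above trace back to the key file's mode: after the chmod 0666, keyring_file_add_key rejects '/tmp/tmp.ifjnsatssg' with "Invalid permissions ... 0100666", which in turn makes nvmf_subsystem_add_host fail with "Key 'key0' does not exist". The 0600 runs succeed, so the keyring evidently refuses group/other-accessible key files. A small sketch that tightens the mode before registering a key; the exact policy is inferred from these runs rather than stated in the log.

import os
import stat

def ensure_private_key_file(path):
    # keyring_file_add_key accepted 0600 earlier and rejected 0666 here,
    # so clear any group/other bits before handing the file to SPDK.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        os.chmod(path, 0o600)

ensure_private_key_file("/tmp/tmp.ifjnsatssg")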
00:14:32.962 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72368 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ifjnsatssg 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=72425 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 72425 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72425 ']' 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.221 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.221 [2024-10-01 13:42:24.928233] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:14:33.221 [2024-10-01 13:42:24.928322] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.221 [2024-10-01 13:42:25.063698] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.481 [2024-10-01 13:42:25.133305] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.481 [2024-10-01 13:42:25.133374] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.481 [2024-10-01 13:42:25.133388] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.481 [2024-10-01 13:42:25.133398] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.482 [2024-10-01 13:42:25.133407] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:33.482 [2024-10-01 13:42:25.133442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.482 [2024-10-01 13:42:25.166679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:33.482 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:33.482 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:33.482 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:33.482 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:33.482 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.482 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.482 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ifjnsatssg 00:14:33.482 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ifjnsatssg 00:14:33.482 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:33.741 [2024-10-01 13:42:25.508313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.741 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:34.000 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:34.259 [2024-10-01 13:42:26.076431] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:34.259 [2024-10-01 13:42:26.076674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:34.259 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:34.518 malloc0 00:14:34.518 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:34.776 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ifjnsatssg 00:14:35.035 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:35.294 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:35.294 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72479 00:14:35.552 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:35.552 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72479 /var/tmp/bdevperf.sock 00:14:35.552 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72479 ']' 
00:14:35.552 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:35.552 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:35.552 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:35.552 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.552 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.552 [2024-10-01 13:42:27.218191] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:14:35.552 [2024-10-01 13:42:27.218319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72479 ] 00:14:35.553 [2024-10-01 13:42:27.369884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.811 [2024-10-01 13:42:27.457115] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.811 [2024-10-01 13:42:27.491542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:35.811 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:35.811 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:35.811 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ifjnsatssg 00:14:36.070 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:36.328 [2024-10-01 13:42:28.102277] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:36.328 TLSTESTn1 00:14:36.587 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:36.846 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:36.846 "subsystems": [ 00:14:36.846 { 00:14:36.846 "subsystem": "keyring", 00:14:36.846 "config": [ 00:14:36.846 { 00:14:36.846 "method": "keyring_file_add_key", 00:14:36.846 "params": { 00:14:36.846 "name": "key0", 00:14:36.846 "path": "/tmp/tmp.ifjnsatssg" 00:14:36.846 } 00:14:36.846 } 00:14:36.846 ] 00:14:36.846 }, 00:14:36.846 { 00:14:36.846 "subsystem": "iobuf", 00:14:36.846 "config": [ 00:14:36.846 { 00:14:36.846 "method": "iobuf_set_options", 00:14:36.846 "params": { 00:14:36.846 "small_pool_count": 8192, 00:14:36.846 "large_pool_count": 1024, 00:14:36.846 "small_bufsize": 8192, 00:14:36.846 "large_bufsize": 135168 00:14:36.846 } 00:14:36.846 } 00:14:36.846 ] 00:14:36.846 }, 00:14:36.846 { 00:14:36.846 "subsystem": "sock", 00:14:36.846 "config": [ 00:14:36.846 { 00:14:36.846 "method": "sock_set_default_impl", 00:14:36.846 "params": { 00:14:36.846 "impl_name": "uring" 00:14:36.846 
} 00:14:36.846 }, 00:14:36.846 { 00:14:36.846 "method": "sock_impl_set_options", 00:14:36.846 "params": { 00:14:36.846 "impl_name": "ssl", 00:14:36.846 "recv_buf_size": 4096, 00:14:36.846 "send_buf_size": 4096, 00:14:36.846 "enable_recv_pipe": true, 00:14:36.846 "enable_quickack": false, 00:14:36.846 "enable_placement_id": 0, 00:14:36.846 "enable_zerocopy_send_server": true, 00:14:36.846 "enable_zerocopy_send_client": false, 00:14:36.846 "zerocopy_threshold": 0, 00:14:36.846 "tls_version": 0, 00:14:36.846 "enable_ktls": false 00:14:36.846 } 00:14:36.846 }, 00:14:36.846 { 00:14:36.846 "method": "sock_impl_set_options", 00:14:36.846 "params": { 00:14:36.846 "impl_name": "posix", 00:14:36.846 "recv_buf_size": 2097152, 00:14:36.846 "send_buf_size": 2097152, 00:14:36.846 "enable_recv_pipe": true, 00:14:36.846 "enable_quickack": false, 00:14:36.846 "enable_placement_id": 0, 00:14:36.846 "enable_zerocopy_send_server": true, 00:14:36.846 "enable_zerocopy_send_client": false, 00:14:36.846 "zerocopy_threshold": 0, 00:14:36.846 "tls_version": 0, 00:14:36.846 "enable_ktls": false 00:14:36.846 } 00:14:36.846 }, 00:14:36.846 { 00:14:36.846 "method": "sock_impl_set_options", 00:14:36.846 "params": { 00:14:36.846 "impl_name": "uring", 00:14:36.846 "recv_buf_size": 2097152, 00:14:36.846 "send_buf_size": 2097152, 00:14:36.846 "enable_recv_pipe": true, 00:14:36.846 "enable_quickack": false, 00:14:36.846 "enable_placement_id": 0, 00:14:36.846 "enable_zerocopy_send_server": false, 00:14:36.846 "enable_zerocopy_send_client": false, 00:14:36.846 "zerocopy_threshold": 0, 00:14:36.846 "tls_version": 0, 00:14:36.846 "enable_ktls": false 00:14:36.846 } 00:14:36.846 } 00:14:36.846 ] 00:14:36.846 }, 00:14:36.846 { 00:14:36.846 "subsystem": "vmd", 00:14:36.846 "config": [] 00:14:36.846 }, 00:14:36.846 { 00:14:36.846 "subsystem": "accel", 00:14:36.846 "config": [ 00:14:36.846 { 00:14:36.846 "method": "accel_set_options", 00:14:36.846 "params": { 00:14:36.846 "small_cache_size": 128, 00:14:36.846 "large_cache_size": 16, 00:14:36.846 "task_count": 2048, 00:14:36.846 "sequence_count": 2048, 00:14:36.846 "buf_count": 2048 00:14:36.846 } 00:14:36.846 } 00:14:36.846 ] 00:14:36.846 }, 00:14:36.846 { 00:14:36.846 "subsystem": "bdev", 00:14:36.846 "config": [ 00:14:36.846 { 00:14:36.846 "method": "bdev_set_options", 00:14:36.846 "params": { 00:14:36.846 "bdev_io_pool_size": 65535, 00:14:36.846 "bdev_io_cache_size": 256, 00:14:36.846 "bdev_auto_examine": true, 00:14:36.846 "iobuf_small_cache_size": 128, 00:14:36.846 "iobuf_large_cache_size": 16 00:14:36.846 } 00:14:36.846 }, 00:14:36.846 { 00:14:36.846 "method": "bdev_raid_set_options", 00:14:36.846 "params": { 00:14:36.846 "process_window_size_kb": 1024, 00:14:36.846 "process_max_bandwidth_mb_sec": 0 00:14:36.846 } 00:14:36.846 }, 00:14:36.846 { 00:14:36.846 "method": "bdev_iscsi_set_options", 00:14:36.846 "params": { 00:14:36.846 "timeout_sec": 30 00:14:36.846 } 00:14:36.846 }, 00:14:36.846 { 00:14:36.846 "method": "bdev_nvme_set_options", 00:14:36.846 "params": { 00:14:36.846 "action_on_timeout": "none", 00:14:36.846 "timeout_us": 0, 00:14:36.846 "timeout_admin_us": 0, 00:14:36.846 "keep_alive_timeout_ms": 10000, 00:14:36.846 "arbitration_burst": 0, 00:14:36.846 "low_priority_weight": 0, 00:14:36.846 "medium_priority_weight": 0, 00:14:36.846 "high_priority_weight": 0, 00:14:36.846 "nvme_adminq_poll_period_us": 10000, 00:14:36.846 "nvme_ioq_poll_period_us": 0, 00:14:36.846 "io_queue_requests": 0, 00:14:36.846 "delay_cmd_submit": true, 00:14:36.846 "transport_retry_count": 4, 
00:14:36.846 "bdev_retry_count": 3, 00:14:36.846 "transport_ack_timeout": 0, 00:14:36.846 "ctrlr_loss_timeout_sec": 0, 00:14:36.846 "reconnect_delay_sec": 0, 00:14:36.846 "fast_io_fail_timeout_sec": 0, 00:14:36.846 "disable_auto_failback": false, 00:14:36.846 "generate_uuids": false, 00:14:36.846 "transport_tos": 0, 00:14:36.846 "nvme_error_stat": false, 00:14:36.846 "rdma_srq_size": 0, 00:14:36.846 "io_path_stat": false, 00:14:36.846 "allow_accel_sequence": false, 00:14:36.846 "rdma_max_cq_size": 0, 00:14:36.846 "rdma_cm_event_timeout_ms": 0, 00:14:36.846 "dhchap_digests": [ 00:14:36.847 "sha256", 00:14:36.847 "sha384", 00:14:36.847 "sha512" 00:14:36.847 ], 00:14:36.847 "dhchap_dhgroups": [ 00:14:36.847 "null", 00:14:36.847 "ffdhe2048", 00:14:36.847 "ffdhe3072", 00:14:36.847 "ffdhe4096", 00:14:36.847 "ffdhe6144", 00:14:36.847 "ffdhe8192" 00:14:36.847 ] 00:14:36.847 } 00:14:36.847 }, 00:14:36.847 { 00:14:36.847 "method": "bdev_nvme_set_hotplug", 00:14:36.847 "params": { 00:14:36.847 "period_us": 100000, 00:14:36.847 "enable": false 00:14:36.847 } 00:14:36.847 }, 00:14:36.847 { 00:14:36.847 "method": "bdev_malloc_create", 00:14:36.847 "params": { 00:14:36.847 "name": "malloc0", 00:14:36.847 "num_blocks": 8192, 00:14:36.847 "block_size": 4096, 00:14:36.847 "physical_block_size": 4096, 00:14:36.847 "uuid": "57145235-a7c1-4a19-aaed-2a335ec3e91a", 00:14:36.847 "optimal_io_boundary": 0, 00:14:36.847 "md_size": 0, 00:14:36.847 "dif_type": 0, 00:14:36.847 "dif_is_head_of_md": false, 00:14:36.847 "dif_pi_format": 0 00:14:36.847 } 00:14:36.847 }, 00:14:36.847 { 00:14:36.847 "method": "bdev_wait_for_examine" 00:14:36.847 } 00:14:36.847 ] 00:14:36.847 }, 00:14:36.847 { 00:14:36.847 "subsystem": "nbd", 00:14:36.847 "config": [] 00:14:36.847 }, 00:14:36.847 { 00:14:36.847 "subsystem": "scheduler", 00:14:36.847 "config": [ 00:14:36.847 { 00:14:36.847 "method": "framework_set_scheduler", 00:14:36.847 "params": { 00:14:36.847 "name": "static" 00:14:36.847 } 00:14:36.847 } 00:14:36.847 ] 00:14:36.847 }, 00:14:36.847 { 00:14:36.847 "subsystem": "nvmf", 00:14:36.847 "config": [ 00:14:36.847 { 00:14:36.847 "method": "nvmf_set_config", 00:14:36.847 "params": { 00:14:36.847 "discovery_filter": "match_any", 00:14:36.847 "admin_cmd_passthru": { 00:14:36.847 "identify_ctrlr": false 00:14:36.847 }, 00:14:36.847 "dhchap_digests": [ 00:14:36.847 "sha256", 00:14:36.847 "sha384", 00:14:36.847 "sha512" 00:14:36.847 ], 00:14:36.847 "dhchap_dhgroups": [ 00:14:36.847 "null", 00:14:36.847 "ffdhe2048", 00:14:36.847 "ffdhe3072", 00:14:36.847 "ffdhe4096", 00:14:36.847 "ffdhe6144", 00:14:36.847 "ffdhe8192" 00:14:36.847 ] 00:14:36.847 } 00:14:36.847 }, 00:14:36.847 { 00:14:36.847 "method": "nvmf_set_max_subsystems", 00:14:36.847 "params": { 00:14:36.847 "max_subsystems": 1024 00:14:36.847 } 00:14:36.847 }, 00:14:36.847 { 00:14:36.847 "method": "nvmf_set_crdt", 00:14:36.847 "params": { 00:14:36.847 "crdt1": 0, 00:14:36.847 "crdt2": 0, 00:14:36.847 "crdt3": 0 00:14:36.847 } 00:14:36.847 }, 00:14:36.847 { 00:14:36.847 "method": "nvmf_create_transport", 00:14:36.847 "params": { 00:14:36.847 "trtype": "TCP", 00:14:36.847 "max_queue_depth": 128, 00:14:36.847 "max_io_qpairs_per_ctrlr": 127, 00:14:36.847 "in_capsule_data_size": 4096, 00:14:36.847 "max_io_size": 131072, 00:14:36.847 "io_unit_size": 131072, 00:14:36.847 "max_aq_depth": 128, 00:14:36.847 "num_shared_buffers": 511, 00:14:36.847 "buf_cache_size": 4294967295, 00:14:36.847 "dif_insert_or_strip": false, 00:14:36.847 "zcopy": false, 00:14:36.847 "c2h_success": false, 00:14:36.847 
"sock_priority": 0, 00:14:36.847 "abort_timeout_sec": 1, 00:14:36.847 "ack_timeout": 0, 00:14:36.847 "data_wr_pool_size": 0 00:14:36.847 } 00:14:36.847 }, 00:14:36.847 { 00:14:36.847 "method": "nvmf_create_subsystem", 00:14:36.847 "params": { 00:14:36.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.847 "allow_any_host": false, 00:14:36.847 "serial_number": "SPDK00000000000001", 00:14:36.847 "model_number": "SPDK bdev Controller", 00:14:36.847 "max_namespaces": 10, 00:14:36.847 "min_cntlid": 1, 00:14:36.847 "max_cntlid": 65519, 00:14:36.847 "ana_reporting": false 00:14:36.847 } 00:14:36.847 }, 00:14:36.847 { 00:14:36.847 "method": "nvmf_subsystem_add_host", 00:14:36.847 "params": { 00:14:36.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.847 "host": "nqn.2016-06.io.spdk:host1", 00:14:36.847 "psk": "key0" 00:14:36.847 } 00:14:36.847 }, 00:14:36.847 { 00:14:36.847 "method": "nvmf_subsystem_add_ns", 00:14:36.847 "params": { 00:14:36.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.847 "namespace": { 00:14:36.847 "nsid": 1, 00:14:36.847 "bdev_name": "malloc0", 00:14:36.847 "nguid": "57145235A7C14A19AAED2A335EC3E91A", 00:14:36.847 "uuid": "57145235-a7c1-4a19-aaed-2a335ec3e91a", 00:14:36.847 "no_auto_visible": false 00:14:36.847 } 00:14:36.847 } 00:14:36.847 }, 00:14:36.847 { 00:14:36.847 "method": "nvmf_subsystem_add_listener", 00:14:36.847 "params": { 00:14:36.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.847 "listen_address": { 00:14:36.847 "trtype": "TCP", 00:14:36.847 "adrfam": "IPv4", 00:14:36.847 "traddr": "10.0.0.3", 00:14:36.847 "trsvcid": "4420" 00:14:36.847 }, 00:14:36.847 "secure_channel": true 00:14:36.847 } 00:14:36.847 } 00:14:36.847 ] 00:14:36.847 } 00:14:36.847 ] 00:14:36.847 }' 00:14:36.847 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:37.414 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:37.414 "subsystems": [ 00:14:37.414 { 00:14:37.414 "subsystem": "keyring", 00:14:37.414 "config": [ 00:14:37.414 { 00:14:37.414 "method": "keyring_file_add_key", 00:14:37.414 "params": { 00:14:37.414 "name": "key0", 00:14:37.414 "path": "/tmp/tmp.ifjnsatssg" 00:14:37.414 } 00:14:37.414 } 00:14:37.414 ] 00:14:37.414 }, 00:14:37.414 { 00:14:37.414 "subsystem": "iobuf", 00:14:37.414 "config": [ 00:14:37.414 { 00:14:37.414 "method": "iobuf_set_options", 00:14:37.414 "params": { 00:14:37.414 "small_pool_count": 8192, 00:14:37.414 "large_pool_count": 1024, 00:14:37.414 "small_bufsize": 8192, 00:14:37.414 "large_bufsize": 135168 00:14:37.414 } 00:14:37.414 } 00:14:37.414 ] 00:14:37.414 }, 00:14:37.414 { 00:14:37.414 "subsystem": "sock", 00:14:37.414 "config": [ 00:14:37.414 { 00:14:37.414 "method": "sock_set_default_impl", 00:14:37.414 "params": { 00:14:37.414 "impl_name": "uring" 00:14:37.414 } 00:14:37.414 }, 00:14:37.414 { 00:14:37.414 "method": "sock_impl_set_options", 00:14:37.414 "params": { 00:14:37.414 "impl_name": "ssl", 00:14:37.414 "recv_buf_size": 4096, 00:14:37.414 "send_buf_size": 4096, 00:14:37.414 "enable_recv_pipe": true, 00:14:37.414 "enable_quickack": false, 00:14:37.414 "enable_placement_id": 0, 00:14:37.414 "enable_zerocopy_send_server": true, 00:14:37.414 "enable_zerocopy_send_client": false, 00:14:37.414 "zerocopy_threshold": 0, 00:14:37.414 "tls_version": 0, 00:14:37.414 "enable_ktls": false 00:14:37.414 } 00:14:37.414 }, 00:14:37.414 { 00:14:37.414 "method": "sock_impl_set_options", 00:14:37.414 "params": { 
00:14:37.414 "impl_name": "posix", 00:14:37.414 "recv_buf_size": 2097152, 00:14:37.414 "send_buf_size": 2097152, 00:14:37.414 "enable_recv_pipe": true, 00:14:37.414 "enable_quickack": false, 00:14:37.414 "enable_placement_id": 0, 00:14:37.414 "enable_zerocopy_send_server": true, 00:14:37.414 "enable_zerocopy_send_client": false, 00:14:37.414 "zerocopy_threshold": 0, 00:14:37.414 "tls_version": 0, 00:14:37.414 "enable_ktls": false 00:14:37.414 } 00:14:37.414 }, 00:14:37.414 { 00:14:37.414 "method": "sock_impl_set_options", 00:14:37.414 "params": { 00:14:37.414 "impl_name": "uring", 00:14:37.414 "recv_buf_size": 2097152, 00:14:37.414 "send_buf_size": 2097152, 00:14:37.414 "enable_recv_pipe": true, 00:14:37.414 "enable_quickack": false, 00:14:37.414 "enable_placement_id": 0, 00:14:37.414 "enable_zerocopy_send_server": false, 00:14:37.414 "enable_zerocopy_send_client": false, 00:14:37.414 "zerocopy_threshold": 0, 00:14:37.414 "tls_version": 0, 00:14:37.414 "enable_ktls": false 00:14:37.414 } 00:14:37.414 } 00:14:37.414 ] 00:14:37.414 }, 00:14:37.414 { 00:14:37.414 "subsystem": "vmd", 00:14:37.414 "config": [] 00:14:37.414 }, 00:14:37.414 { 00:14:37.414 "subsystem": "accel", 00:14:37.414 "config": [ 00:14:37.414 { 00:14:37.415 "method": "accel_set_options", 00:14:37.415 "params": { 00:14:37.415 "small_cache_size": 128, 00:14:37.415 "large_cache_size": 16, 00:14:37.415 "task_count": 2048, 00:14:37.415 "sequence_count": 2048, 00:14:37.415 "buf_count": 2048 00:14:37.415 } 00:14:37.415 } 00:14:37.415 ] 00:14:37.415 }, 00:14:37.415 { 00:14:37.415 "subsystem": "bdev", 00:14:37.415 "config": [ 00:14:37.415 { 00:14:37.415 "method": "bdev_set_options", 00:14:37.415 "params": { 00:14:37.415 "bdev_io_pool_size": 65535, 00:14:37.415 "bdev_io_cache_size": 256, 00:14:37.415 "bdev_auto_examine": true, 00:14:37.415 "iobuf_small_cache_size": 128, 00:14:37.415 "iobuf_large_cache_size": 16 00:14:37.415 } 00:14:37.415 }, 00:14:37.415 { 00:14:37.415 "method": "bdev_raid_set_options", 00:14:37.415 "params": { 00:14:37.415 "process_window_size_kb": 1024, 00:14:37.415 "process_max_bandwidth_mb_sec": 0 00:14:37.415 } 00:14:37.415 }, 00:14:37.415 { 00:14:37.415 "method": "bdev_iscsi_set_options", 00:14:37.415 "params": { 00:14:37.415 "timeout_sec": 30 00:14:37.415 } 00:14:37.415 }, 00:14:37.415 { 00:14:37.415 "method": "bdev_nvme_set_options", 00:14:37.415 "params": { 00:14:37.415 "action_on_timeout": "none", 00:14:37.415 "timeout_us": 0, 00:14:37.415 "timeout_admin_us": 0, 00:14:37.415 "keep_alive_timeout_ms": 10000, 00:14:37.415 "arbitration_burst": 0, 00:14:37.415 "low_priority_weight": 0, 00:14:37.415 "medium_priority_weight": 0, 00:14:37.415 "high_priority_weight": 0, 00:14:37.415 "nvme_adminq_poll_period_us": 10000, 00:14:37.415 "nvme_ioq_poll_period_us": 0, 00:14:37.415 "io_queue_requests": 512, 00:14:37.415 "delay_cmd_submit": true, 00:14:37.415 "transport_retry_count": 4, 00:14:37.415 "bdev_retry_count": 3, 00:14:37.415 "transport_ack_timeout": 0, 00:14:37.415 "ctrlr_loss_timeout_sec": 0, 00:14:37.415 "reconnect_delay_sec": 0, 00:14:37.415 "fast_io_fail_timeout_sec": 0, 00:14:37.415 "disable_auto_failback": false, 00:14:37.415 "generate_uuids": false, 00:14:37.415 "transport_tos": 0, 00:14:37.415 "nvme_error_stat": false, 00:14:37.415 "rdma_srq_size": 0, 00:14:37.415 "io_path_stat": false, 00:14:37.415 "allow_accel_sequence": false, 00:14:37.415 "rdma_max_cq_size": 0, 00:14:37.415 "rdma_cm_event_timeout_ms": 0, 00:14:37.415 "dhchap_digests": [ 00:14:37.415 "sha256", 00:14:37.415 "sha384", 00:14:37.415 "sha512" 
00:14:37.415 ], 00:14:37.415 "dhchap_dhgroups": [ 00:14:37.415 "null", 00:14:37.415 "ffdhe2048", 00:14:37.415 "ffdhe3072", 00:14:37.415 "ffdhe4096", 00:14:37.415 "ffdhe6144", 00:14:37.415 "ffdhe8192" 00:14:37.415 ] 00:14:37.415 } 00:14:37.415 }, 00:14:37.415 { 00:14:37.415 "method": "bdev_nvme_attach_controller", 00:14:37.415 "params": { 00:14:37.415 "name": "TLSTEST", 00:14:37.415 "trtype": "TCP", 00:14:37.415 "adrfam": "IPv4", 00:14:37.415 "traddr": "10.0.0.3", 00:14:37.415 "trsvcid": "4420", 00:14:37.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.415 "prchk_reftag": false, 00:14:37.415 "prchk_guard": false, 00:14:37.415 "ctrlr_loss_timeout_sec": 0, 00:14:37.415 "reconnect_delay_sec": 0, 00:14:37.415 "fast_io_fail_timeout_sec": 0, 00:14:37.415 "psk": "key0", 00:14:37.415 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:37.415 "hdgst": false, 00:14:37.415 "ddgst": false, 00:14:37.415 "multipath": "multipath" 00:14:37.415 } 00:14:37.415 }, 00:14:37.415 { 00:14:37.415 "method": "bdev_nvme_set_hotplug", 00:14:37.415 "params": { 00:14:37.415 "period_us": 100000, 00:14:37.415 "enable": false 00:14:37.415 } 00:14:37.415 }, 00:14:37.415 { 00:14:37.415 "method": "bdev_wait_for_examine" 00:14:37.415 } 00:14:37.415 ] 00:14:37.415 }, 00:14:37.415 { 00:14:37.415 "subsystem": "nbd", 00:14:37.415 "config": [] 00:14:37.415 } 00:14:37.415 ] 00:14:37.415 }' 00:14:37.415 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72479 00:14:37.415 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72479 ']' 00:14:37.415 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72479 00:14:37.415 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:37.415 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.415 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72479 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:37.415 killing process with pid 72479 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72479' 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72479 00:14:37.415 Received shutdown signal, test time was about 10.000000 seconds 00:14:37.415 00:14:37.415 Latency(us) 00:14:37.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.415 =================================================================================================================== 00:14:37.415 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72479 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72425 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72425 ']' 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72425 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:37.415 13:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72425 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:37.415 killing process with pid 72425 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72425' 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72425 00:14:37.415 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72425 00:14:37.673 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:37.673 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:37.673 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:37.673 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.674 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:37.674 "subsystems": [ 00:14:37.674 { 00:14:37.674 "subsystem": "keyring", 00:14:37.674 "config": [ 00:14:37.674 { 00:14:37.674 "method": "keyring_file_add_key", 00:14:37.674 "params": { 00:14:37.674 "name": "key0", 00:14:37.674 "path": "/tmp/tmp.ifjnsatssg" 00:14:37.674 } 00:14:37.674 } 00:14:37.674 ] 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "subsystem": "iobuf", 00:14:37.674 "config": [ 00:14:37.674 { 00:14:37.674 "method": "iobuf_set_options", 00:14:37.674 "params": { 00:14:37.674 "small_pool_count": 8192, 00:14:37.674 "large_pool_count": 1024, 00:14:37.674 "small_bufsize": 8192, 00:14:37.674 "large_bufsize": 135168 00:14:37.674 } 00:14:37.674 } 00:14:37.674 ] 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "subsystem": "sock", 00:14:37.674 "config": [ 00:14:37.674 { 00:14:37.674 "method": "sock_set_default_impl", 00:14:37.674 "params": { 00:14:37.674 "impl_name": "uring" 00:14:37.674 } 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "method": "sock_impl_set_options", 00:14:37.674 "params": { 00:14:37.674 "impl_name": "ssl", 00:14:37.674 "recv_buf_size": 4096, 00:14:37.674 "send_buf_size": 4096, 00:14:37.674 "enable_recv_pipe": true, 00:14:37.674 "enable_quickack": false, 00:14:37.674 "enable_placement_id": 0, 00:14:37.674 "enable_zerocopy_send_server": true, 00:14:37.674 "enable_zerocopy_send_client": false, 00:14:37.674 "zerocopy_threshold": 0, 00:14:37.674 "tls_version": 0, 00:14:37.674 "enable_ktls": false 00:14:37.674 } 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "method": "sock_impl_set_options", 00:14:37.674 "params": { 00:14:37.674 "impl_name": "posix", 00:14:37.674 "recv_buf_size": 2097152, 00:14:37.674 "send_buf_size": 2097152, 00:14:37.674 "enable_recv_pipe": true, 00:14:37.674 "enable_quickack": false, 00:14:37.674 "enable_placement_id": 0, 00:14:37.674 "enable_zerocopy_send_server": true, 00:14:37.674 "enable_zerocopy_send_client": false, 00:14:37.674 "zerocopy_threshold": 0, 00:14:37.674 "tls_version": 0, 00:14:37.674 "enable_ktls": false 00:14:37.674 } 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "method": "sock_impl_set_options", 00:14:37.674 "params": { 00:14:37.674 "impl_name": "uring", 00:14:37.674 
"recv_buf_size": 2097152, 00:14:37.674 "send_buf_size": 2097152, 00:14:37.674 "enable_recv_pipe": true, 00:14:37.674 "enable_quickack": false, 00:14:37.674 "enable_placement_id": 0, 00:14:37.674 "enable_zerocopy_send_server": false, 00:14:37.674 "enable_zerocopy_send_client": false, 00:14:37.674 "zerocopy_threshold": 0, 00:14:37.674 "tls_version": 0, 00:14:37.674 "enable_ktls": false 00:14:37.674 } 00:14:37.674 } 00:14:37.674 ] 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "subsystem": "vmd", 00:14:37.674 "config": [] 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "subsystem": "accel", 00:14:37.674 "config": [ 00:14:37.674 { 00:14:37.674 "method": "accel_set_options", 00:14:37.674 "params": { 00:14:37.674 "small_cache_size": 128, 00:14:37.674 "large_cache_size": 16, 00:14:37.674 "task_count": 2048, 00:14:37.674 "sequence_count": 2048, 00:14:37.674 "buf_count": 2048 00:14:37.674 } 00:14:37.674 } 00:14:37.674 ] 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "subsystem": "bdev", 00:14:37.674 "config": [ 00:14:37.674 { 00:14:37.674 "method": "bdev_set_options", 00:14:37.674 "params": { 00:14:37.674 "bdev_io_pool_size": 65535, 00:14:37.674 "bdev_io_cache_size": 256, 00:14:37.674 "bdev_auto_examine": true, 00:14:37.674 "iobuf_small_cache_size": 128, 00:14:37.674 "iobuf_large_cache_size": 16 00:14:37.674 } 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "method": "bdev_raid_set_options", 00:14:37.674 "params": { 00:14:37.674 "process_window_size_kb": 1024, 00:14:37.674 "process_max_bandwidth_mb_sec": 0 00:14:37.674 } 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "method": "bdev_iscsi_set_options", 00:14:37.674 "params": { 00:14:37.674 "timeout_sec": 30 00:14:37.674 } 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "method": "bdev_nvme_set_options", 00:14:37.674 "params": { 00:14:37.674 "action_on_timeout": "none", 00:14:37.674 "timeout_us": 0, 00:14:37.674 "timeout_admin_us": 0, 00:14:37.674 "keep_alive_timeout_ms": 10000, 00:14:37.674 "arbitration_burst": 0, 00:14:37.674 "low_priority_weight": 0, 00:14:37.674 "medium_priority_weight": 0, 00:14:37.674 "high_priority_weight": 0, 00:14:37.674 "nvme_adminq_poll_period_us": 10000, 00:14:37.674 "nvme_ioq_poll_period_us": 0, 00:14:37.674 "io_queue_requests": 0, 00:14:37.674 "delay_cmd_submit": true, 00:14:37.674 "transport_retry_count": 4, 00:14:37.674 "bdev_retry_count": 3, 00:14:37.674 "transport_ack_timeout": 0, 00:14:37.674 "ctrlr_loss_timeout_sec": 0, 00:14:37.674 "reconnect_delay_sec": 0, 00:14:37.674 "fast_io_fail_timeout_sec": 0, 00:14:37.674 "disable_auto_failback": false, 00:14:37.674 "generate_uuids": false, 00:14:37.674 "transport_tos": 0, 00:14:37.674 "nvme_error_stat": false, 00:14:37.674 "rdma_srq_size": 0, 00:14:37.674 "io_path_stat": false, 00:14:37.674 "allow_accel_sequence": false, 00:14:37.674 "rdma_max_cq_size": 0, 00:14:37.674 "rdma_cm_event_timeout_ms": 0, 00:14:37.674 "dhchap_digests": [ 00:14:37.674 "sha256", 00:14:37.674 "sha384", 00:14:37.674 "sha512" 00:14:37.674 ], 00:14:37.674 "dhchap_dhgroups": [ 00:14:37.674 "null", 00:14:37.674 "ffdhe2048", 00:14:37.674 "ffdhe3072", 00:14:37.674 "ffdhe4096", 00:14:37.674 "ffdhe6144", 00:14:37.674 "ffdhe8192" 00:14:37.674 ] 00:14:37.674 } 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "method": "bdev_nvme_set_hotplug", 00:14:37.674 "params": { 00:14:37.674 "period_us": 100000, 00:14:37.674 "enable": false 00:14:37.674 } 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "method": "bdev_malloc_create", 00:14:37.674 "params": { 00:14:37.674 "name": "malloc0", 00:14:37.674 "num_blocks": 8192, 00:14:37.674 
"block_size": 4096, 00:14:37.674 "physical_block_size": 4096, 00:14:37.674 "uuid": "57145235-a7c1-4a19-aaed-2a335ec3e91a", 00:14:37.674 "optimal_io_boundary": 0, 00:14:37.674 "md_size": 0, 00:14:37.674 "dif_type": 0, 00:14:37.674 "dif_is_head_of_md": false, 00:14:37.674 "dif_pi_format": 0 00:14:37.674 } 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "method": "bdev_wait_for_examine" 00:14:37.674 } 00:14:37.674 ] 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "subsystem": "nbd", 00:14:37.674 "config": [] 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "subsystem": "scheduler", 00:14:37.674 "config": [ 00:14:37.674 { 00:14:37.674 "method": "framework_set_scheduler", 00:14:37.674 "params": { 00:14:37.674 "name": "static" 00:14:37.674 } 00:14:37.674 } 00:14:37.674 ] 00:14:37.674 }, 00:14:37.674 { 00:14:37.674 "subsystem": "nvmf", 00:14:37.674 "config": [ 00:14:37.674 { 00:14:37.674 "method": "nvmf_set_config", 00:14:37.674 "params": { 00:14:37.674 "discovery_filter": "match_any", 00:14:37.674 "admin_cmd_passthru": { 00:14:37.674 "identify_ctrlr": false 00:14:37.674 }, 00:14:37.674 "dhchap_digests": [ 00:14:37.674 "sha256", 00:14:37.674 "sha384", 00:14:37.674 "sha512" 00:14:37.674 ], 00:14:37.674 "dhchap_dhgroups": [ 00:14:37.674 "null", 00:14:37.675 "ffdhe2048", 00:14:37.675 "ffdhe3072", 00:14:37.675 "ffdhe4096", 00:14:37.675 "ffdhe6144", 00:14:37.675 "ffdhe8192" 00:14:37.675 ] 00:14:37.675 } 00:14:37.675 }, 00:14:37.675 { 00:14:37.675 "method": "nvmf_set_max_subsystems", 00:14:37.675 "params": { 00:14:37.675 "max_subsystems": 1024 00:14:37.675 } 00:14:37.675 }, 00:14:37.675 { 00:14:37.675 "method": "nvmf_set_crdt", 00:14:37.675 "params": { 00:14:37.675 "crdt1": 0, 00:14:37.675 "crdt2": 0, 00:14:37.675 "crdt3": 0 00:14:37.675 } 00:14:37.675 }, 00:14:37.675 { 00:14:37.675 "method": "nvmf_create_transport", 00:14:37.675 "params": { 00:14:37.675 "trtype": "TCP", 00:14:37.675 "max_queue_depth": 128, 00:14:37.675 "max_io_qpairs_per_ctrlr": 127, 00:14:37.675 "in_capsule_data_size": 4096, 00:14:37.675 "max_io_size": 131072, 00:14:37.675 "io_unit_size": 131072, 00:14:37.675 "max_aq_depth": 128, 00:14:37.675 "num_shared_buffers": 511, 00:14:37.675 "buf_cache_size": 4294967295, 00:14:37.675 "dif_insert_or_strip": false, 00:14:37.675 "zcopy": false, 00:14:37.675 "c2h_success": false, 00:14:37.675 "sock_priority": 0, 00:14:37.675 "abort_timeout_sec": 1, 00:14:37.675 "ack_timeout": 0, 00:14:37.675 "data_wr_pool_size": 0 00:14:37.675 } 00:14:37.675 }, 00:14:37.675 { 00:14:37.675 "method": "nvmf_create_subsystem", 00:14:37.675 "params": { 00:14:37.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.675 "allow_any_host": false, 00:14:37.675 "serial_number": "SPDK00000000000001", 00:14:37.675 "model_number": "SPDK bdev Controller", 00:14:37.675 "max_namespaces": 10, 00:14:37.675 "min_cntlid": 1, 00:14:37.675 "max_cntlid": 65519, 00:14:37.675 "ana_reporting": false 00:14:37.675 } 00:14:37.675 }, 00:14:37.675 { 00:14:37.675 "method": "nvmf_subsystem_add_host", 00:14:37.675 "params": { 00:14:37.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.675 "host": "nqn.2016-06.io.spdk:host1", 00:14:37.675 "psk": "key0" 00:14:37.675 } 00:14:37.675 }, 00:14:37.675 { 00:14:37.675 "method": "nvmf_subsystem_add_ns", 00:14:37.675 "params": { 00:14:37.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.675 "namespace": { 00:14:37.675 "nsid": 1, 00:14:37.675 "bdev_name": "malloc0", 00:14:37.675 "nguid": "57145235A7C14A19AAED2A335EC3E91A", 00:14:37.675 "uuid": "57145235-a7c1-4a19-aaed-2a335ec3e91a", 00:14:37.675 "no_auto_visible": false 
00:14:37.675 } 00:14:37.675 } 00:14:37.675 }, 00:14:37.675 { 00:14:37.675 "method": "nvmf_subsystem_add_listener", 00:14:37.675 "params": { 00:14:37.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.675 "listen_address": { 00:14:37.675 "trtype": "TCP", 00:14:37.675 "adrfam": "IPv4", 00:14:37.675 "traddr": "10.0.0.3", 00:14:37.675 "trsvcid": "4420" 00:14:37.675 }, 00:14:37.675 "secure_channel": true 00:14:37.675 } 00:14:37.675 } 00:14:37.675 ] 00:14:37.675 } 00:14:37.675 ] 00:14:37.675 }' 00:14:37.675 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=72521 00:14:37.675 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:37.675 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 72521 00:14:37.675 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72521 ']' 00:14:37.675 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.675 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.675 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.675 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.675 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.675 [2024-10-01 13:42:29.465968] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:14:37.675 [2024-10-01 13:42:29.466064] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.986 [2024-10-01 13:42:29.603095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.986 [2024-10-01 13:42:29.661118] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.986 [2024-10-01 13:42:29.661172] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.986 [2024-10-01 13:42:29.661183] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.986 [2024-10-01 13:42:29.661191] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.986 [2024-10-01 13:42:29.661199] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
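For reference, the JSON that tls.sh echoes into nvmf_tgt through /dev/fd/62 above is equivalent to configuring an already-running target over its RPC socket. A minimal sketch, not part of the captured run, assuming an nvmf_tgt answering on the default /var/tmp/spdk.sock and commands issued from the SPDK repo root; the NQNs, key name, key path and addresses are the ones used throughout this log:
  # Target-side TLS setup, mirroring setup_nvmf_tgt (target/tls.sh:50-59) seen later in this log
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ifjnsatssg
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
The -k flag requests a secure-channel (TLS) listener, matching "secure_channel": true in the config dump above, and --psk key0 ties host1 to the keyring entry backed by /tmp/tmp.ifjnsatssg.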
00:14:37.986 [2024-10-01 13:42:29.661291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.986 [2024-10-01 13:42:29.805428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:38.261 [2024-10-01 13:42:29.864136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.261 [2024-10-01 13:42:29.903246] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:38.261 [2024-10-01 13:42:29.903499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:38.828 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.828 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:38.828 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:38.828 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:38.828 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.828 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:38.828 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72553 00:14:38.828 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72553 /var/tmp/bdevperf.sock 00:14:38.828 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72553 ']' 00:14:38.828 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:38.828 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:38.828 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.828 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:38.828 "subsystems": [ 00:14:38.828 { 00:14:38.828 "subsystem": "keyring", 00:14:38.828 "config": [ 00:14:38.828 { 00:14:38.828 "method": "keyring_file_add_key", 00:14:38.828 "params": { 00:14:38.828 "name": "key0", 00:14:38.828 "path": "/tmp/tmp.ifjnsatssg" 00:14:38.828 } 00:14:38.828 } 00:14:38.828 ] 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "subsystem": "iobuf", 00:14:38.828 "config": [ 00:14:38.828 { 00:14:38.828 "method": "iobuf_set_options", 00:14:38.828 "params": { 00:14:38.828 "small_pool_count": 8192, 00:14:38.828 "large_pool_count": 1024, 00:14:38.828 "small_bufsize": 8192, 00:14:38.828 "large_bufsize": 135168 00:14:38.828 } 00:14:38.828 } 00:14:38.828 ] 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "subsystem": "sock", 00:14:38.828 "config": [ 00:14:38.828 { 00:14:38.828 "method": "sock_set_default_impl", 00:14:38.828 "params": { 00:14:38.828 "impl_name": "uring" 00:14:38.828 } 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "method": "sock_impl_set_options", 00:14:38.828 "params": { 00:14:38.828 "impl_name": "ssl", 00:14:38.828 "recv_buf_size": 4096, 00:14:38.828 "send_buf_size": 4096, 00:14:38.828 "enable_recv_pipe": true, 00:14:38.828 "enable_quickack": false, 00:14:38.828 "enable_placement_id": 0, 
00:14:38.828 "enable_zerocopy_send_server": true, 00:14:38.828 "enable_zerocopy_send_client": false, 00:14:38.828 "zerocopy_threshold": 0, 00:14:38.828 "tls_version": 0, 00:14:38.828 "enable_ktls": false 00:14:38.828 } 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "method": "sock_impl_set_options", 00:14:38.828 "params": { 00:14:38.828 "impl_name": "posix", 00:14:38.828 "recv_buf_size": 2097152, 00:14:38.828 "send_buf_size": 2097152, 00:14:38.828 "enable_recv_pipe": true, 00:14:38.828 "enable_quickack": false, 00:14:38.828 "enable_placement_id": 0, 00:14:38.828 "enable_zerocopy_send_server": true, 00:14:38.828 "enable_zerocopy_send_client": false, 00:14:38.828 "zerocopy_threshold": 0, 00:14:38.828 "tls_version": 0, 00:14:38.828 "enable_ktls": false 00:14:38.828 } 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "method": "sock_impl_set_options", 00:14:38.828 "params": { 00:14:38.828 "impl_name": "uring", 00:14:38.828 "recv_buf_size": 2097152, 00:14:38.828 "send_buf_size": 2097152, 00:14:38.828 "enable_recv_pipe": true, 00:14:38.828 "enable_quickack": false, 00:14:38.828 "enable_placement_id": 0, 00:14:38.828 "enable_zerocopy_send_server": false, 00:14:38.828 "enable_zerocopy_send_client": false, 00:14:38.828 "zerocopy_threshold": 0, 00:14:38.828 "tls_version": 0, 00:14:38.828 "enable_ktls": false 00:14:38.828 } 00:14:38.828 } 00:14:38.828 ] 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "subsystem": "vmd", 00:14:38.828 "config": [] 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "subsystem": "accel", 00:14:38.828 "config": [ 00:14:38.828 { 00:14:38.828 "method": "accel_set_options", 00:14:38.828 "params": { 00:14:38.828 "small_cache_size": 128, 00:14:38.828 "large_cache_size": 16, 00:14:38.828 "task_count": 2048, 00:14:38.828 "sequence_count": 2048, 00:14:38.828 "buf_count": 2048 00:14:38.828 } 00:14:38.828 } 00:14:38.828 ] 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "subsystem": "bdev", 00:14:38.828 "config": [ 00:14:38.828 { 00:14:38.828 "method": "bdev_set_options", 00:14:38.828 "params": { 00:14:38.828 "bdev_io_pool_size": 65535, 00:14:38.828 "bdev_io_cache_size": 256, 00:14:38.828 "bdev_auto_examine": true, 00:14:38.828 "iobuf_small_cache_size": 128, 00:14:38.828 "iobuf_large_cache_size": 16 00:14:38.828 } 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "method": "bdev_raid_set_options", 00:14:38.828 "params": { 00:14:38.828 "process_window_size_kb": 1024, 00:14:38.828 "process_max_bandwidth_mb_sec": 0 00:14:38.828 } 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "method": "bdev_iscsi_set_options", 00:14:38.828 "params": { 00:14:38.828 "timeout_sec": 30 00:14:38.828 } 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "method": "bdev_nvme_set_options", 00:14:38.828 "params": { 00:14:38.828 "action_on_timeout": "none", 00:14:38.828 "timeout_us": 0, 00:14:38.828 "timeout_admin_us": 0, 00:14:38.828 "keep_alive_timeout_ms": 10000, 00:14:38.828 "arbitration_burst": 0, 00:14:38.828 "low_priority_weight": 0, 00:14:38.828 "medium_priority_weight": 0, 00:14:38.828 "high_priority_weight": 0, 00:14:38.828 "nvme_adminq_poll_period_us": 10000, 00:14:38.828 "nvme_ioq_poll_period_us": 0, 00:14:38.828 "io_queue_requests": 512, 00:14:38.829 "delay_cmd_submit": true, 00:14:38.829 "transport_retry_count": 4, 00:14:38.829 "bdev_retry_count": 3, 00:14:38.829 "transport_ack_timeout": 0, 00:14:38.829 "ctrlr_loss_timeout_sec": 0, 00:14:38.829 "reconnect_delay_sec": 0, 00:14:38.829 "fast_io_fail_timeout_sec": 0, 00:14:38.829 "disable_auto_failback": false, 00:14:38.829 "generate_uuids": false, 00:14:38.829 "transport_tos": 0, 
00:14:38.829 "nvme_error_stat": false, 00:14:38.829 "rdma_srq_size": 0, 00:14:38.829 "io_path_stat": false, 00:14:38.829 "allow_accel_sequence": false, 00:14:38.829 "rdma_max_cq_size": 0, 00:14:38.829 "rdma_cm_event_timeout_ms": 0, 00:14:38.829 "dhchap_digests": [ 00:14:38.829 "sha256", 00:14:38.829 "sha384", 00:14:38.829 "sha512" 00:14:38.829 ], 00:14:38.829 "dhchap_dhgroups": [ 00:14:38.829 "null", 00:14:38.829 "ffdhe2048", 00:14:38.829 "ffdhe3072", 00:14:38.829 "ffdhe4096", 00:14:38.829 "ffdhe6144", 00:14:38.829 "ffdhe8192" 00:14:38.829 ] 00:14:38.829 } 00:14:38.829 }, 00:14:38.829 { 00:14:38.829 "method": "bdev_nvme_attach_controller", 00:14:38.829 "params": { 00:14:38.829 "name": "TLSTEST", 00:14:38.829 "trtype": "TCP", 00:14:38.829 "adrfam": "IPv4", 00:14:38.829 "traddr": "10.0.0.3", 00:14:38.829 "trsvcid": "4420", 00:14:38.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.829 "prchk_reftag": false, 00:14:38.829 "prchk_guard": false, 00:14:38.829 "ctrlr_loss_timeout_sec": 0, 00:14:38.829 "reconnect_delay_sec": 0, 00:14:38.829 "fast_io_fail_timeout_sec": 0, 00:14:38.829 "psk": "key0", 00:14:38.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.829 "hdgst": false, 00:14:38.829 "ddgst": false, 00:14:38.829 "multipath": "multipath" 00:14:38.829 } 00:14:38.829 }, 00:14:38.829 { 00:14:38.829 "method": "bdev_nvme_set_hotplug", 00:14:38.829 "params": { 00:14:38.829 "period_us": 100000, 00:14:38.829 "enable": false 00:14:38.829 } 00:14:38.829 }, 00:14:38.829 { 00:14:38.829 "method": "bdev_wait_for_examine" 00:14:38.829 } 00:14:38.829 ] 00:14:38.829 }, 00:14:38.829 { 00:14:38.829 "subsystem": "nbd", 00:14:38.829 "config": [] 00:14:38.829 } 00:14:38.829 ] 00:14:38.829 }' 00:14:38.829 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:38.829 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.829 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.829 [2024-10-01 13:42:30.657825] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:14:38.829 [2024-10-01 13:42:30.657918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72553 ] 00:14:39.087 [2024-10-01 13:42:30.788278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.087 [2024-10-01 13:42:30.849570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.345 [2024-10-01 13:42:30.962595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.345 [2024-10-01 13:42:30.995765] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:40.280 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.280 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:40.280 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:40.280 Running I/O for 10 seconds... 
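Because bdevperf is launched with -z it does not start I/O on its own: it applies the JSON config supplied on /dev/fd/63, then waits on its RPC socket (-r /var/tmp/bdevperf.sock) until bdevperf.py perform_tests triggers the run. A condensed sketch of the same flow, not part of the captured run, using explicit RPCs instead of the config file as was done for the earlier bdevperf instance (pid 72479) above; paths are relative to the SPDK repo and the target from the previous step is assumed to still be listening on 10.0.0.3:4420:
  # Start bdevperf idle, build the TLS-backed NVMe bdev over its RPC socket, then trigger the run
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ifjnsatssg
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests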
00:14:50.248 3751.00 IOPS, 14.65 MiB/s 3793.00 IOPS, 14.82 MiB/s 3803.67 IOPS, 14.86 MiB/s 3841.50 IOPS, 15.01 MiB/s 3850.80 IOPS, 15.04 MiB/s 3842.17 IOPS, 15.01 MiB/s 3795.00 IOPS, 14.82 MiB/s 3752.50 IOPS, 14.66 MiB/s 3756.56 IOPS, 14.67 MiB/s 3769.60 IOPS, 14.72 MiB/s 00:14:50.248 Latency(us) 00:14:50.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.248 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:50.248 Verification LBA range: start 0x0 length 0x2000 00:14:50.248 TLSTESTn1 : 10.02 3774.89 14.75 0.00 0.00 33842.47 6791.91 34555.35 00:14:50.248 =================================================================================================================== 00:14:50.248 Total : 3774.89 14.75 0.00 0.00 33842.47 6791.91 34555.35 00:14:50.248 { 00:14:50.248 "results": [ 00:14:50.248 { 00:14:50.248 "job": "TLSTESTn1", 00:14:50.248 "core_mask": "0x4", 00:14:50.248 "workload": "verify", 00:14:50.248 "status": "finished", 00:14:50.248 "verify_range": { 00:14:50.248 "start": 0, 00:14:50.248 "length": 8192 00:14:50.248 }, 00:14:50.248 "queue_depth": 128, 00:14:50.248 "io_size": 4096, 00:14:50.248 "runtime": 10.019106, 00:14:50.248 "iops": 3774.887699561218, 00:14:50.248 "mibps": 14.745655076411008, 00:14:50.248 "io_failed": 0, 00:14:50.248 "io_timeout": 0, 00:14:50.248 "avg_latency_us": 33842.47087279553, 00:14:50.248 "min_latency_us": 6791.912727272727, 00:14:50.248 "max_latency_us": 34555.34545454545 00:14:50.248 } 00:14:50.248 ], 00:14:50.248 "core_count": 1 00:14:50.248 } 00:14:50.248 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:50.248 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72553 00:14:50.248 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72553 ']' 00:14:50.248 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72553 00:14:50.248 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:50.248 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:50.248 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72553 00:14:50.248 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:50.248 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:50.248 killing process with pid 72553 00:14:50.248 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72553' 00:14:50.248 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72553 00:14:50.248 Received shutdown signal, test time was about 10.000000 seconds 00:14:50.248 00:14:50.248 Latency(us) 00:14:50.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.248 =================================================================================================================== 00:14:50.248 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.248 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72553 00:14:50.508 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72521 00:14:50.508 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 72521 ']' 00:14:50.508 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72521 00:14:50.508 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:50.508 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:50.508 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72521 00:14:50.508 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:50.508 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:50.508 killing process with pid 72521 00:14:50.508 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72521' 00:14:50.508 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72521 00:14:50.508 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72521 00:14:50.767 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:50.767 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:50.767 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:50.767 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.767 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=72694 00:14:50.767 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:50.767 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 72694 00:14:50.767 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72694 ']' 00:14:50.767 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.767 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.767 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.767 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.767 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.767 [2024-10-01 13:42:42.529503] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:14:50.767 [2024-10-01 13:42:42.529628] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.025 [2024-10-01 13:42:42.666179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.025 [2024-10-01 13:42:42.730659] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:51.025 [2024-10-01 13:42:42.730711] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.025 [2024-10-01 13:42:42.730723] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.025 [2024-10-01 13:42:42.730731] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.025 [2024-10-01 13:42:42.730739] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.025 [2024-10-01 13:42:42.730769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.025 [2024-10-01 13:42:42.761119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.025 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:51.025 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:51.025 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:51.025 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:51.025 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:51.025 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.025 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ifjnsatssg 00:14:51.025 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ifjnsatssg 00:14:51.025 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:51.283 [2024-10-01 13:42:43.133812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.541 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:51.816 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:52.076 [2024-10-01 13:42:43.809919] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:52.076 [2024-10-01 13:42:43.810159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:52.076 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:52.335 malloc0 00:14:52.335 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:52.902 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ifjnsatssg 00:14:52.902 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:53.160 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72742 00:14:53.160 13:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:53.160 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:53.160 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72742 /var/tmp/bdevperf.sock 00:14:53.160 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72742 ']' 00:14:53.160 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.160 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:53.160 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.160 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:53.160 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.419 [2024-10-01 13:42:45.080876] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:14:53.419 [2024-10-01 13:42:45.081007] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72742 ] 00:14:53.419 [2024-10-01 13:42:45.225770] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.677 [2024-10-01 13:42:45.315322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.677 [2024-10-01 13:42:45.349451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:54.614 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.614 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:54.614 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ifjnsatssg 00:14:54.614 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:54.872 [2024-10-01 13:42:46.651062] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:54.872 nvme0n1 00:14:55.130 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:55.130 Running I/O for 1 seconds... 
00:14:56.117 3844.00 IOPS, 15.02 MiB/s 00:14:56.117 Latency(us) 00:14:56.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.117 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.117 Verification LBA range: start 0x0 length 0x2000 00:14:56.117 nvme0n1 : 1.02 3905.33 15.26 0.00 0.00 32492.22 7060.01 34078.72 00:14:56.117 =================================================================================================================== 00:14:56.117 Total : 3905.33 15.26 0.00 0.00 32492.22 7060.01 34078.72 00:14:56.117 { 00:14:56.117 "results": [ 00:14:56.117 { 00:14:56.117 "job": "nvme0n1", 00:14:56.117 "core_mask": "0x2", 00:14:56.117 "workload": "verify", 00:14:56.117 "status": "finished", 00:14:56.117 "verify_range": { 00:14:56.117 "start": 0, 00:14:56.117 "length": 8192 00:14:56.117 }, 00:14:56.117 "queue_depth": 128, 00:14:56.117 "io_size": 4096, 00:14:56.117 "runtime": 1.017328, 00:14:56.117 "iops": 3905.3284683012753, 00:14:56.117 "mibps": 15.255189329301857, 00:14:56.117 "io_failed": 0, 00:14:56.117 "io_timeout": 0, 00:14:56.117 "avg_latency_us": 32492.22409811684, 00:14:56.117 "min_latency_us": 7060.014545454545, 00:14:56.117 "max_latency_us": 34078.72 00:14:56.117 } 00:14:56.117 ], 00:14:56.117 "core_count": 1 00:14:56.117 } 00:14:56.117 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72742 00:14:56.117 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72742 ']' 00:14:56.117 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72742 00:14:56.117 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:56.117 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:56.117 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72742 00:14:56.117 killing process with pid 72742 00:14:56.117 Received shutdown signal, test time was about 1.000000 seconds 00:14:56.117 00:14:56.117 Latency(us) 00:14:56.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.117 =================================================================================================================== 00:14:56.117 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:56.117 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:56.117 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:56.117 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72742' 00:14:56.117 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72742 00:14:56.117 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72742 00:14:56.376 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72694 00:14:56.376 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72694 ']' 00:14:56.376 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72694 00:14:56.376 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:56.376 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
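As a quick sanity check on the nvme0n1 result block above: at 4096-byte I/Os, 3905.33 IOPS works out to 3905.33 * 4096 / 2^20 ≈ 15.26 MiB/s, which matches the reported throughput, and with a queue depth of 128 the ~32.5 ms average latency implies roughly 128 / 0.0325 ≈ 3940 IOPS, in line with the measured rate once ramp-up over the ~1 s run is allowed for.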
00:14:56.376 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72694 00:14:56.376 killing process with pid 72694 00:14:56.376 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:56.376 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:56.376 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72694' 00:14:56.376 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72694 00:14:56.376 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72694 00:14:56.635 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:56.635 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:56.635 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:56.635 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.635 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:56.635 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=72794 00:14:56.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.635 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 72794 00:14:56.635 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72794 ']' 00:14:56.635 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.635 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:56.635 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.635 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:56.635 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.635 [2024-10-01 13:42:48.374358] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:14:56.635 [2024-10-01 13:42:48.374734] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.894 [2024-10-01 13:42:48.515984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.894 [2024-10-01 13:42:48.594275] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.894 [2024-10-01 13:42:48.594342] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.894 [2024-10-01 13:42:48.594356] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.894 [2024-10-01 13:42:48.594367] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:56.894 [2024-10-01 13:42:48.594376] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.894 [2024-10-01 13:42:48.594407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.894 [2024-10-01 13:42:48.628796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.830 [2024-10-01 13:42:49.466744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.830 malloc0 00:14:57.830 [2024-10-01 13:42:49.510412] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:57.830 [2024-10-01 13:42:49.510646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72832 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:57.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72832 /var/tmp/bdevperf.sock 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72832 ']' 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.830 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.830 [2024-10-01 13:42:49.595567] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:14:57.830 [2024-10-01 13:42:49.595929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72832 ] 00:14:58.088 [2024-10-01 13:42:49.733029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.088 [2024-10-01 13:42:49.806651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.088 [2024-10-01 13:42:49.840081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:58.088 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.088 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:58.088 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ifjnsatssg 00:14:58.660 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:58.939 [2024-10-01 13:42:50.537382] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:58.939 nvme0n1 00:14:58.939 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:58.939 Running I/O for 1 seconds... 00:15:00.312 3919.00 IOPS, 15.31 MiB/s 00:15:00.312 Latency(us) 00:15:00.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.312 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.312 Verification LBA range: start 0x0 length 0x2000 00:15:00.312 nvme0n1 : 1.02 3975.28 15.53 0.00 0.00 31890.61 5838.66 26452.71 00:15:00.312 =================================================================================================================== 00:15:00.312 Total : 3975.28 15.53 0.00 0.00 31890.61 5838.66 26452.71 00:15:00.312 { 00:15:00.312 "results": [ 00:15:00.312 { 00:15:00.312 "job": "nvme0n1", 00:15:00.312 "core_mask": "0x2", 00:15:00.312 "workload": "verify", 00:15:00.312 "status": "finished", 00:15:00.312 "verify_range": { 00:15:00.312 "start": 0, 00:15:00.312 "length": 8192 00:15:00.312 }, 00:15:00.312 "queue_depth": 128, 00:15:00.312 "io_size": 4096, 00:15:00.312 "runtime": 1.018293, 00:15:00.312 "iops": 3975.28019931395, 00:15:00.312 "mibps": 15.528438278570118, 00:15:00.312 "io_failed": 0, 00:15:00.312 "io_timeout": 0, 00:15:00.312 "avg_latency_us": 31890.612662594325, 00:15:00.312 "min_latency_us": 5838.6618181818185, 00:15:00.312 "max_latency_us": 26452.712727272727 00:15:00.312 } 00:15:00.312 ], 00:15:00.312 "core_count": 1 00:15:00.312 } 00:15:00.312 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:00.312 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.312 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.312 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.312 13:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:00.312 "subsystems": [ 00:15:00.312 { 00:15:00.312 "subsystem": "keyring", 00:15:00.312 "config": [ 00:15:00.312 { 00:15:00.312 "method": "keyring_file_add_key", 00:15:00.312 "params": { 00:15:00.312 "name": "key0", 00:15:00.312 "path": "/tmp/tmp.ifjnsatssg" 00:15:00.312 } 00:15:00.312 } 00:15:00.312 ] 00:15:00.312 }, 00:15:00.312 { 00:15:00.312 "subsystem": "iobuf", 00:15:00.312 "config": [ 00:15:00.312 { 00:15:00.312 "method": "iobuf_set_options", 00:15:00.312 "params": { 00:15:00.312 "small_pool_count": 8192, 00:15:00.312 "large_pool_count": 1024, 00:15:00.312 "small_bufsize": 8192, 00:15:00.312 "large_bufsize": 135168 00:15:00.312 } 00:15:00.312 } 00:15:00.312 ] 00:15:00.312 }, 00:15:00.312 { 00:15:00.312 "subsystem": "sock", 00:15:00.312 "config": [ 00:15:00.312 { 00:15:00.312 "method": "sock_set_default_impl", 00:15:00.312 "params": { 00:15:00.312 "impl_name": "uring" 00:15:00.312 } 00:15:00.312 }, 00:15:00.312 { 00:15:00.312 "method": "sock_impl_set_options", 00:15:00.312 "params": { 00:15:00.312 "impl_name": "ssl", 00:15:00.312 "recv_buf_size": 4096, 00:15:00.312 "send_buf_size": 4096, 00:15:00.312 "enable_recv_pipe": true, 00:15:00.312 "enable_quickack": false, 00:15:00.312 "enable_placement_id": 0, 00:15:00.312 "enable_zerocopy_send_server": true, 00:15:00.312 "enable_zerocopy_send_client": false, 00:15:00.312 "zerocopy_threshold": 0, 00:15:00.312 "tls_version": 0, 00:15:00.312 "enable_ktls": false 00:15:00.312 } 00:15:00.312 }, 00:15:00.312 { 00:15:00.312 "method": "sock_impl_set_options", 00:15:00.312 "params": { 00:15:00.312 "impl_name": "posix", 00:15:00.312 "recv_buf_size": 2097152, 00:15:00.312 "send_buf_size": 2097152, 00:15:00.312 "enable_recv_pipe": true, 00:15:00.312 "enable_quickack": false, 00:15:00.312 "enable_placement_id": 0, 00:15:00.312 "enable_zerocopy_send_server": true, 00:15:00.312 "enable_zerocopy_send_client": false, 00:15:00.312 "zerocopy_threshold": 0, 00:15:00.312 "tls_version": 0, 00:15:00.312 "enable_ktls": false 00:15:00.312 } 00:15:00.312 }, 00:15:00.312 { 00:15:00.312 "method": "sock_impl_set_options", 00:15:00.312 "params": { 00:15:00.312 "impl_name": "uring", 00:15:00.312 "recv_buf_size": 2097152, 00:15:00.312 "send_buf_size": 2097152, 00:15:00.312 "enable_recv_pipe": true, 00:15:00.312 "enable_quickack": false, 00:15:00.312 "enable_placement_id": 0, 00:15:00.312 "enable_zerocopy_send_server": false, 00:15:00.312 "enable_zerocopy_send_client": false, 00:15:00.312 "zerocopy_threshold": 0, 00:15:00.312 "tls_version": 0, 00:15:00.312 "enable_ktls": false 00:15:00.312 } 00:15:00.312 } 00:15:00.312 ] 00:15:00.312 }, 00:15:00.312 { 00:15:00.312 "subsystem": "vmd", 00:15:00.312 "config": [] 00:15:00.312 }, 00:15:00.312 { 00:15:00.312 "subsystem": "accel", 00:15:00.312 "config": [ 00:15:00.312 { 00:15:00.312 "method": "accel_set_options", 00:15:00.312 "params": { 00:15:00.312 "small_cache_size": 128, 00:15:00.312 "large_cache_size": 16, 00:15:00.312 "task_count": 2048, 00:15:00.312 "sequence_count": 2048, 00:15:00.312 "buf_count": 2048 00:15:00.312 } 00:15:00.312 } 00:15:00.312 ] 00:15:00.312 }, 00:15:00.312 { 00:15:00.312 "subsystem": "bdev", 00:15:00.312 "config": [ 00:15:00.312 { 00:15:00.312 "method": "bdev_set_options", 00:15:00.312 "params": { 00:15:00.313 "bdev_io_pool_size": 65535, 00:15:00.313 "bdev_io_cache_size": 256, 00:15:00.313 "bdev_auto_examine": true, 00:15:00.313 "iobuf_small_cache_size": 128, 00:15:00.313 "iobuf_large_cache_size": 16 00:15:00.313 } 
00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "method": "bdev_raid_set_options", 00:15:00.313 "params": { 00:15:00.313 "process_window_size_kb": 1024, 00:15:00.313 "process_max_bandwidth_mb_sec": 0 00:15:00.313 } 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "method": "bdev_iscsi_set_options", 00:15:00.313 "params": { 00:15:00.313 "timeout_sec": 30 00:15:00.313 } 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "method": "bdev_nvme_set_options", 00:15:00.313 "params": { 00:15:00.313 "action_on_timeout": "none", 00:15:00.313 "timeout_us": 0, 00:15:00.313 "timeout_admin_us": 0, 00:15:00.313 "keep_alive_timeout_ms": 10000, 00:15:00.313 "arbitration_burst": 0, 00:15:00.313 "low_priority_weight": 0, 00:15:00.313 "medium_priority_weight": 0, 00:15:00.313 "high_priority_weight": 0, 00:15:00.313 "nvme_adminq_poll_period_us": 10000, 00:15:00.313 "nvme_ioq_poll_period_us": 0, 00:15:00.313 "io_queue_requests": 0, 00:15:00.313 "delay_cmd_submit": true, 00:15:00.313 "transport_retry_count": 4, 00:15:00.313 "bdev_retry_count": 3, 00:15:00.313 "transport_ack_timeout": 0, 00:15:00.313 "ctrlr_loss_timeout_sec": 0, 00:15:00.313 "reconnect_delay_sec": 0, 00:15:00.313 "fast_io_fail_timeout_sec": 0, 00:15:00.313 "disable_auto_failback": false, 00:15:00.313 "generate_uuids": false, 00:15:00.313 "transport_tos": 0, 00:15:00.313 "nvme_error_stat": false, 00:15:00.313 "rdma_srq_size": 0, 00:15:00.313 "io_path_stat": false, 00:15:00.313 "allow_accel_sequence": false, 00:15:00.313 "rdma_max_cq_size": 0, 00:15:00.313 "rdma_cm_event_timeout_ms": 0, 00:15:00.313 "dhchap_digests": [ 00:15:00.313 "sha256", 00:15:00.313 "sha384", 00:15:00.313 "sha512" 00:15:00.313 ], 00:15:00.313 "dhchap_dhgroups": [ 00:15:00.313 "null", 00:15:00.313 "ffdhe2048", 00:15:00.313 "ffdhe3072", 00:15:00.313 "ffdhe4096", 00:15:00.313 "ffdhe6144", 00:15:00.313 "ffdhe8192" 00:15:00.313 ] 00:15:00.313 } 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "method": "bdev_nvme_set_hotplug", 00:15:00.313 "params": { 00:15:00.313 "period_us": 100000, 00:15:00.313 "enable": false 00:15:00.313 } 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "method": "bdev_malloc_create", 00:15:00.313 "params": { 00:15:00.313 "name": "malloc0", 00:15:00.313 "num_blocks": 8192, 00:15:00.313 "block_size": 4096, 00:15:00.313 "physical_block_size": 4096, 00:15:00.313 "uuid": "6818fe7b-e4fd-4803-a7ff-b017224c1db5", 00:15:00.313 "optimal_io_boundary": 0, 00:15:00.313 "md_size": 0, 00:15:00.313 "dif_type": 0, 00:15:00.313 "dif_is_head_of_md": false, 00:15:00.313 "dif_pi_format": 0 00:15:00.313 } 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "method": "bdev_wait_for_examine" 00:15:00.313 } 00:15:00.313 ] 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "subsystem": "nbd", 00:15:00.313 "config": [] 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "subsystem": "scheduler", 00:15:00.313 "config": [ 00:15:00.313 { 00:15:00.313 "method": "framework_set_scheduler", 00:15:00.313 "params": { 00:15:00.313 "name": "static" 00:15:00.313 } 00:15:00.313 } 00:15:00.313 ] 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "subsystem": "nvmf", 00:15:00.313 "config": [ 00:15:00.313 { 00:15:00.313 "method": "nvmf_set_config", 00:15:00.313 "params": { 00:15:00.313 "discovery_filter": "match_any", 00:15:00.313 "admin_cmd_passthru": { 00:15:00.313 "identify_ctrlr": false 00:15:00.313 }, 00:15:00.313 "dhchap_digests": [ 00:15:00.313 "sha256", 00:15:00.313 "sha384", 00:15:00.313 "sha512" 00:15:00.313 ], 00:15:00.313 "dhchap_dhgroups": [ 00:15:00.313 "null", 00:15:00.313 "ffdhe2048", 00:15:00.313 "ffdhe3072", 00:15:00.313 "ffdhe4096", 
00:15:00.313 "ffdhe6144", 00:15:00.313 "ffdhe8192" 00:15:00.313 ] 00:15:00.313 } 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "method": "nvmf_set_max_subsystems", 00:15:00.313 "params": { 00:15:00.313 "max_subsystems": 1024 00:15:00.313 } 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "method": "nvmf_set_crdt", 00:15:00.313 "params": { 00:15:00.313 "crdt1": 0, 00:15:00.313 "crdt2": 0, 00:15:00.313 "crdt3": 0 00:15:00.313 } 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "method": "nvmf_create_transport", 00:15:00.313 "params": { 00:15:00.313 "trtype": "TCP", 00:15:00.313 "max_queue_depth": 128, 00:15:00.313 "max_io_qpairs_per_ctrlr": 127, 00:15:00.313 "in_capsule_data_size": 4096, 00:15:00.313 "max_io_size": 131072, 00:15:00.313 "io_unit_size": 131072, 00:15:00.313 "max_aq_depth": 128, 00:15:00.313 "num_shared_buffers": 511, 00:15:00.313 "buf_cache_size": 4294967295, 00:15:00.313 "dif_insert_or_strip": false, 00:15:00.313 "zcopy": false, 00:15:00.313 "c2h_success": false, 00:15:00.313 "sock_priority": 0, 00:15:00.313 "abort_timeout_sec": 1, 00:15:00.313 "ack_timeout": 0, 00:15:00.313 "data_wr_pool_size": 0 00:15:00.313 } 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "method": "nvmf_create_subsystem", 00:15:00.313 "params": { 00:15:00.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.313 "allow_any_host": false, 00:15:00.313 "serial_number": "00000000000000000000", 00:15:00.313 "model_number": "SPDK bdev Controller", 00:15:00.313 "max_namespaces": 32, 00:15:00.313 "min_cntlid": 1, 00:15:00.313 "max_cntlid": 65519, 00:15:00.313 "ana_reporting": false 00:15:00.313 } 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "method": "nvmf_subsystem_add_host", 00:15:00.313 "params": { 00:15:00.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.313 "host": "nqn.2016-06.io.spdk:host1", 00:15:00.313 "psk": "key0" 00:15:00.313 } 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "method": "nvmf_subsystem_add_ns", 00:15:00.313 "params": { 00:15:00.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.313 "namespace": { 00:15:00.313 "nsid": 1, 00:15:00.313 "bdev_name": "malloc0", 00:15:00.313 "nguid": "6818FE7BE4FD4803A7FFB017224C1DB5", 00:15:00.313 "uuid": "6818fe7b-e4fd-4803-a7ff-b017224c1db5", 00:15:00.313 "no_auto_visible": false 00:15:00.313 } 00:15:00.313 } 00:15:00.313 }, 00:15:00.313 { 00:15:00.313 "method": "nvmf_subsystem_add_listener", 00:15:00.313 "params": { 00:15:00.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.313 "listen_address": { 00:15:00.313 "trtype": "TCP", 00:15:00.313 "adrfam": "IPv4", 00:15:00.313 "traddr": "10.0.0.3", 00:15:00.313 "trsvcid": "4420" 00:15:00.313 }, 00:15:00.313 "secure_channel": false, 00:15:00.313 "sock_impl": "ssl" 00:15:00.313 } 00:15:00.313 } 00:15:00.313 ] 00:15:00.313 } 00:15:00.313 ] 00:15:00.313 }' 00:15:00.313 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:00.571 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:00.571 "subsystems": [ 00:15:00.571 { 00:15:00.571 "subsystem": "keyring", 00:15:00.571 "config": [ 00:15:00.571 { 00:15:00.571 "method": "keyring_file_add_key", 00:15:00.571 "params": { 00:15:00.571 "name": "key0", 00:15:00.571 "path": "/tmp/tmp.ifjnsatssg" 00:15:00.571 } 00:15:00.571 } 00:15:00.571 ] 00:15:00.571 }, 00:15:00.571 { 00:15:00.571 "subsystem": "iobuf", 00:15:00.571 "config": [ 00:15:00.571 { 00:15:00.571 "method": "iobuf_set_options", 00:15:00.571 "params": { 00:15:00.571 "small_pool_count": 8192, 00:15:00.571 
"large_pool_count": 1024, 00:15:00.571 "small_bufsize": 8192, 00:15:00.571 "large_bufsize": 135168 00:15:00.571 } 00:15:00.571 } 00:15:00.571 ] 00:15:00.571 }, 00:15:00.571 { 00:15:00.571 "subsystem": "sock", 00:15:00.571 "config": [ 00:15:00.571 { 00:15:00.571 "method": "sock_set_default_impl", 00:15:00.571 "params": { 00:15:00.571 "impl_name": "uring" 00:15:00.571 } 00:15:00.571 }, 00:15:00.571 { 00:15:00.571 "method": "sock_impl_set_options", 00:15:00.571 "params": { 00:15:00.571 "impl_name": "ssl", 00:15:00.571 "recv_buf_size": 4096, 00:15:00.571 "send_buf_size": 4096, 00:15:00.571 "enable_recv_pipe": true, 00:15:00.571 "enable_quickack": false, 00:15:00.571 "enable_placement_id": 0, 00:15:00.571 "enable_zerocopy_send_server": true, 00:15:00.571 "enable_zerocopy_send_client": false, 00:15:00.571 "zerocopy_threshold": 0, 00:15:00.571 "tls_version": 0, 00:15:00.571 "enable_ktls": false 00:15:00.571 } 00:15:00.571 }, 00:15:00.571 { 00:15:00.571 "method": "sock_impl_set_options", 00:15:00.571 "params": { 00:15:00.571 "impl_name": "posix", 00:15:00.571 "recv_buf_size": 2097152, 00:15:00.571 "send_buf_size": 2097152, 00:15:00.571 "enable_recv_pipe": true, 00:15:00.571 "enable_quickack": false, 00:15:00.571 "enable_placement_id": 0, 00:15:00.571 "enable_zerocopy_send_server": true, 00:15:00.571 "enable_zerocopy_send_client": false, 00:15:00.571 "zerocopy_threshold": 0, 00:15:00.571 "tls_version": 0, 00:15:00.571 "enable_ktls": false 00:15:00.571 } 00:15:00.571 }, 00:15:00.571 { 00:15:00.571 "method": "sock_impl_set_options", 00:15:00.571 "params": { 00:15:00.571 "impl_name": "uring", 00:15:00.571 "recv_buf_size": 2097152, 00:15:00.571 "send_buf_size": 2097152, 00:15:00.571 "enable_recv_pipe": true, 00:15:00.571 "enable_quickack": false, 00:15:00.571 "enable_placement_id": 0, 00:15:00.571 "enable_zerocopy_send_server": false, 00:15:00.571 "enable_zerocopy_send_client": false, 00:15:00.571 "zerocopy_threshold": 0, 00:15:00.571 "tls_version": 0, 00:15:00.571 "enable_ktls": false 00:15:00.571 } 00:15:00.571 } 00:15:00.571 ] 00:15:00.571 }, 00:15:00.571 { 00:15:00.571 "subsystem": "vmd", 00:15:00.571 "config": [] 00:15:00.571 }, 00:15:00.571 { 00:15:00.571 "subsystem": "accel", 00:15:00.571 "config": [ 00:15:00.571 { 00:15:00.571 "method": "accel_set_options", 00:15:00.571 "params": { 00:15:00.571 "small_cache_size": 128, 00:15:00.571 "large_cache_size": 16, 00:15:00.571 "task_count": 2048, 00:15:00.571 "sequence_count": 2048, 00:15:00.571 "buf_count": 2048 00:15:00.571 } 00:15:00.571 } 00:15:00.571 ] 00:15:00.571 }, 00:15:00.571 { 00:15:00.571 "subsystem": "bdev", 00:15:00.571 "config": [ 00:15:00.571 { 00:15:00.571 "method": "bdev_set_options", 00:15:00.571 "params": { 00:15:00.571 "bdev_io_pool_size": 65535, 00:15:00.571 "bdev_io_cache_size": 256, 00:15:00.571 "bdev_auto_examine": true, 00:15:00.571 "iobuf_small_cache_size": 128, 00:15:00.571 "iobuf_large_cache_size": 16 00:15:00.571 } 00:15:00.571 }, 00:15:00.571 { 00:15:00.571 "method": "bdev_raid_set_options", 00:15:00.571 "params": { 00:15:00.571 "process_window_size_kb": 1024, 00:15:00.571 "process_max_bandwidth_mb_sec": 0 00:15:00.571 } 00:15:00.571 }, 00:15:00.571 { 00:15:00.571 "method": "bdev_iscsi_set_options", 00:15:00.571 "params": { 00:15:00.571 "timeout_sec": 30 00:15:00.571 } 00:15:00.572 }, 00:15:00.572 { 00:15:00.572 "method": "bdev_nvme_set_options", 00:15:00.572 "params": { 00:15:00.572 "action_on_timeout": "none", 00:15:00.572 "timeout_us": 0, 00:15:00.572 "timeout_admin_us": 0, 00:15:00.572 "keep_alive_timeout_ms": 10000, 
00:15:00.572 "arbitration_burst": 0, 00:15:00.572 "low_priority_weight": 0, 00:15:00.572 "medium_priority_weight": 0, 00:15:00.572 "high_priority_weight": 0, 00:15:00.572 "nvme_adminq_poll_period_us": 10000, 00:15:00.572 "nvme_ioq_poll_period_us": 0, 00:15:00.572 "io_queue_requests": 512, 00:15:00.572 "delay_cmd_submit": true, 00:15:00.572 "transport_retry_count": 4, 00:15:00.572 "bdev_retry_count": 3, 00:15:00.572 "transport_ack_timeout": 0, 00:15:00.572 "ctrlr_loss_timeout_sec": 0, 00:15:00.572 "reconnect_delay_sec": 0, 00:15:00.572 "fast_io_fail_timeout_sec": 0, 00:15:00.572 "disable_auto_failback": false, 00:15:00.572 "generate_uuids": false, 00:15:00.572 "transport_tos": 0, 00:15:00.572 "nvme_error_stat": false, 00:15:00.572 "rdma_srq_size": 0, 00:15:00.572 "io_path_stat": false, 00:15:00.572 "allow_accel_sequence": false, 00:15:00.572 "rdma_max_cq_size": 0, 00:15:00.572 "rdma_cm_event_timeout_ms": 0, 00:15:00.572 "dhchap_digests": [ 00:15:00.572 "sha256", 00:15:00.572 "sha384", 00:15:00.572 "sha512" 00:15:00.572 ], 00:15:00.572 "dhchap_dhgroups": [ 00:15:00.572 "null", 00:15:00.572 "ffdhe2048", 00:15:00.572 "ffdhe3072", 00:15:00.572 "ffdhe4096", 00:15:00.572 "ffdhe6144", 00:15:00.572 "ffdhe8192" 00:15:00.572 ] 00:15:00.572 } 00:15:00.572 }, 00:15:00.572 { 00:15:00.572 "method": "bdev_nvme_attach_controller", 00:15:00.572 "params": { 00:15:00.572 "name": "nvme0", 00:15:00.572 "trtype": "TCP", 00:15:00.572 "adrfam": "IPv4", 00:15:00.572 "traddr": "10.0.0.3", 00:15:00.572 "trsvcid": "4420", 00:15:00.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.572 "prchk_reftag": false, 00:15:00.572 "prchk_guard": false, 00:15:00.572 "ctrlr_loss_timeout_sec": 0, 00:15:00.572 "reconnect_delay_sec": 0, 00:15:00.572 "fast_io_fail_timeout_sec": 0, 00:15:00.572 "psk": "key0", 00:15:00.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:00.572 "hdgst": false, 00:15:00.572 "ddgst": false, 00:15:00.572 "multipath": "multipath" 00:15:00.572 } 00:15:00.572 }, 00:15:00.572 { 00:15:00.572 "method": "bdev_nvme_set_hotplug", 00:15:00.572 "params": { 00:15:00.572 "period_us": 100000, 00:15:00.572 "enable": false 00:15:00.572 } 00:15:00.572 }, 00:15:00.572 { 00:15:00.572 "method": "bdev_enable_histogram", 00:15:00.572 "params": { 00:15:00.572 "name": "nvme0n1", 00:15:00.572 "enable": true 00:15:00.572 } 00:15:00.572 }, 00:15:00.572 { 00:15:00.572 "method": "bdev_wait_for_examine" 00:15:00.572 } 00:15:00.572 ] 00:15:00.572 }, 00:15:00.572 { 00:15:00.572 "subsystem": "nbd", 00:15:00.572 "config": [] 00:15:00.572 } 00:15:00.572 ] 00:15:00.572 }' 00:15:00.572 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72832 00:15:00.572 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72832 ']' 00:15:00.572 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72832 00:15:00.572 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:00.572 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.572 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72832 00:15:00.572 killing process with pid 72832 00:15:00.572 Received shutdown signal, test time was about 1.000000 seconds 00:15:00.572 00:15:00.572 Latency(us) 00:15:00.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.572 
=================================================================================================================== 00:15:00.572 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:00.572 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:00.572 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:00.572 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72832' 00:15:00.572 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72832 00:15:00.572 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72832 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72794 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72794 ']' 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72794 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72794 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72794' 00:15:00.830 killing process with pid 72794 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72794 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72794 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.830 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:00.830 "subsystems": [ 00:15:00.830 { 00:15:00.830 "subsystem": "keyring", 00:15:00.830 "config": [ 00:15:00.830 { 00:15:00.830 "method": "keyring_file_add_key", 00:15:00.830 "params": { 00:15:00.830 "name": "key0", 00:15:00.830 "path": "/tmp/tmp.ifjnsatssg" 00:15:00.830 } 00:15:00.830 } 00:15:00.830 ] 00:15:00.830 }, 00:15:00.830 { 00:15:00.830 "subsystem": "iobuf", 00:15:00.830 "config": [ 00:15:00.830 { 00:15:00.830 "method": "iobuf_set_options", 00:15:00.830 "params": { 00:15:00.830 "small_pool_count": 8192, 00:15:00.830 "large_pool_count": 1024, 00:15:00.830 "small_bufsize": 8192, 00:15:00.830 "large_bufsize": 135168 00:15:00.830 } 00:15:00.830 } 00:15:00.830 ] 00:15:00.830 }, 00:15:00.830 { 00:15:00.830 "subsystem": "sock", 00:15:00.830 "config": [ 00:15:00.830 { 00:15:00.830 "method": "sock_set_default_impl", 00:15:00.830 "params": { 00:15:00.830 "impl_name": "uring" 00:15:00.830 } 00:15:00.830 }, 00:15:00.830 { 00:15:00.830 "method": 
"sock_impl_set_options", 00:15:00.830 "params": { 00:15:00.830 "impl_name": "ssl", 00:15:00.830 "recv_buf_size": 4096, 00:15:00.830 "send_buf_size": 4096, 00:15:00.830 "enable_recv_pipe": true, 00:15:00.830 "enable_quickack": false, 00:15:00.830 "enable_placement_id": 0, 00:15:00.830 "enable_zerocopy_send_server": true, 00:15:00.830 "enable_zerocopy_send_client": false, 00:15:00.830 "zerocopy_threshold": 0, 00:15:00.830 "tls_version": 0, 00:15:00.830 "enable_ktls": false 00:15:00.830 } 00:15:00.830 }, 00:15:00.830 { 00:15:00.831 "method": "sock_impl_set_options", 00:15:00.831 "params": { 00:15:00.831 "impl_name": "posix", 00:15:00.831 "recv_buf_size": 2097152, 00:15:00.831 "send_buf_size": 2097152, 00:15:00.831 "enable_recv_pipe": true, 00:15:00.831 "enable_quickack": false, 00:15:00.831 "enable_placement_id": 0, 00:15:00.831 "enable_zerocopy_send_server": true, 00:15:00.831 "enable_zerocopy_send_client": false, 00:15:00.831 "zerocopy_threshold": 0, 00:15:00.831 "tls_version": 0, 00:15:00.831 "enable_ktls": false 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "sock_impl_set_options", 00:15:00.831 "params": { 00:15:00.831 "impl_name": "uring", 00:15:00.831 "recv_buf_size": 2097152, 00:15:00.831 "send_buf_size": 2097152, 00:15:00.831 "enable_recv_pipe": true, 00:15:00.831 "enable_quickack": false, 00:15:00.831 "enable_placement_id": 0, 00:15:00.831 "enable_zerocopy_send_server": false, 00:15:00.831 "enable_zerocopy_send_client": false, 00:15:00.831 "zerocopy_threshold": 0, 00:15:00.831 "tls_version": 0, 00:15:00.831 "enable_ktls": false 00:15:00.831 } 00:15:00.831 } 00:15:00.831 ] 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "subsystem": "vmd", 00:15:00.831 "config": [] 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "subsystem": "accel", 00:15:00.831 "config": [ 00:15:00.831 { 00:15:00.831 "method": "accel_set_options", 00:15:00.831 "params": { 00:15:00.831 "small_cache_size": 128, 00:15:00.831 "large_cache_size": 16, 00:15:00.831 "task_count": 2048, 00:15:00.831 "sequence_count": 2048, 00:15:00.831 "buf_count": 2048 00:15:00.831 } 00:15:00.831 } 00:15:00.831 ] 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "subsystem": "bdev", 00:15:00.831 "config": [ 00:15:00.831 { 00:15:00.831 "method": "bdev_set_options", 00:15:00.831 "params": { 00:15:00.831 "bdev_io_pool_size": 65535, 00:15:00.831 "bdev_io_cache_size": 256, 00:15:00.831 "bdev_auto_examine": true, 00:15:00.831 "iobuf_small_cache_size": 128, 00:15:00.831 "iobuf_large_cache_size": 16 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "bdev_raid_set_options", 00:15:00.831 "params": { 00:15:00.831 "process_window_size_kb": 1024, 00:15:00.831 "process_max_bandwidth_mb_sec": 0 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "bdev_iscsi_set_options", 00:15:00.831 "params": { 00:15:00.831 "timeout_sec": 30 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "bdev_nvme_set_options", 00:15:00.831 "params": { 00:15:00.831 "action_on_timeout": "none", 00:15:00.831 "timeout_us": 0, 00:15:00.831 "timeout_admin_us": 0, 00:15:00.831 "keep_alive_timeout_ms": 10000, 00:15:00.831 "arbitration_burst": 0, 00:15:00.831 "low_priority_weight": 0, 00:15:00.831 "medium_priority_weight": 0, 00:15:00.831 "high_priority_weight": 0, 00:15:00.831 "nvme_adminq_poll_period_us": 10000, 00:15:00.831 "nvme_ioq_poll_period_us": 0, 00:15:00.831 "io_queue_requests": 0, 00:15:00.831 "delay_cmd_submit": true, 00:15:00.831 "transport_retry_count": 4, 00:15:00.831 "bdev_retry_count": 3, 00:15:00.831 
"transport_ack_timeout": 0, 00:15:00.831 "ctrlr_loss_timeout_sec": 0, 00:15:00.831 "reconnect_delay_sec": 0, 00:15:00.831 "fast_io_fail_timeout_sec": 0, 00:15:00.831 "disable_auto_failback": false, 00:15:00.831 "generate_uuids": false, 00:15:00.831 "transport_tos": 0, 00:15:00.831 "nvme_error_stat": false, 00:15:00.831 "rdma_srq_size": 0, 00:15:00.831 "io_path_stat": false, 00:15:00.831 "allow_accel_sequence": false, 00:15:00.831 "rdma_max_cq_size": 0, 00:15:00.831 "rdma_cm_event_timeout_ms": 0, 00:15:00.831 "dhchap_digests": [ 00:15:00.831 "sha256", 00:15:00.831 "sha384", 00:15:00.831 "sha512" 00:15:00.831 ], 00:15:00.831 "dhchap_dhgroups": [ 00:15:00.831 "null", 00:15:00.831 "ffdhe2048", 00:15:00.831 "ffdhe3072", 00:15:00.831 "ffdhe4096", 00:15:00.831 "ffdhe6144", 00:15:00.831 "ffdhe8192" 00:15:00.831 ] 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "bdev_nvme_set_hotplug", 00:15:00.831 "params": { 00:15:00.831 "period_us": 100000, 00:15:00.831 "enable": false 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "bdev_malloc_create", 00:15:00.831 "params": { 00:15:00.831 "name": "malloc0", 00:15:00.831 "num_blocks": 8192, 00:15:00.831 "block_size": 4096, 00:15:00.831 "physical_block_size": 4096, 00:15:00.831 "uuid": "6818fe7b-e4fd-4803-a7ff-b017224c1db5", 00:15:00.831 "optimal_io_boundary": 0, 00:15:00.831 "md_size": 0, 00:15:00.831 "dif_type": 0, 00:15:00.831 "dif_is_head_of_md": false, 00:15:00.831 "dif_pi_format": 0 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "bdev_wait_for_examine" 00:15:00.831 } 00:15:00.831 ] 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "subsystem": "nbd", 00:15:00.831 "config": [] 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "subsystem": "scheduler", 00:15:00.831 "config": [ 00:15:00.831 { 00:15:00.831 "method": "framework_set_scheduler", 00:15:00.831 "params": { 00:15:00.831 "name": "static" 00:15:00.831 } 00:15:00.831 } 00:15:00.831 ] 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "subsystem": "nvmf", 00:15:00.831 "config": [ 00:15:00.831 { 00:15:00.831 "method": "nvmf_set_config", 00:15:00.831 "params": { 00:15:00.831 "discovery_filter": "match_any", 00:15:00.831 "admin_cmd_passthru": { 00:15:00.831 "identify_ctrlr": false 00:15:00.831 }, 00:15:00.831 "dhchap_digests": [ 00:15:00.831 "sha256", 00:15:00.831 "sha384", 00:15:00.831 "sha512" 00:15:00.831 ], 00:15:00.831 "dhchap_dhgroups": [ 00:15:00.831 "null", 00:15:00.831 "ffdhe2048", 00:15:00.831 "ffdhe3072", 00:15:00.831 "ffdhe4096", 00:15:00.831 "ffdhe6144", 00:15:00.831 "ffdhe8192" 00:15:00.831 ] 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "nvmf_set_max_subsystems", 00:15:00.831 "params": { 00:15:00.831 "max_subsystems": 1024 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "nvmf_set_crdt", 00:15:00.831 "params": { 00:15:00.831 "crdt1": 0, 00:15:00.831 "crdt2": 0, 00:15:00.831 "crdt3": 0 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "nvmf_create_transport", 00:15:00.831 "params": { 00:15:00.831 "trtype": "TCP", 00:15:00.831 "max_queue_depth": 128, 00:15:00.831 "max_io_qpairs_per_ctrlr": 127, 00:15:00.831 "in_capsule_data_size": 4096, 00:15:00.831 "max_io_size": 131072, 00:15:00.831 "io_unit_size": 131072, 00:15:00.831 "max_aq_depth": 128, 00:15:00.831 "num_shared_buffers": 511, 00:15:00.831 "buf_cache_size": 4294967295, 00:15:00.831 "dif_insert_or_strip": false, 00:15:00.831 "zcopy": false, 00:15:00.831 "c2h_success": false, 00:15:00.831 "sock_priority": 0, 00:15:00.831 
"abort_timeout_sec": 1, 00:15:00.831 "ack_timeout": 0, 00:15:00.831 "data_wr_pool_size": 0 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "nvmf_create_subsystem", 00:15:00.831 "params": { 00:15:00.831 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.831 "allow_any_host": false, 00:15:00.831 "serial_number": "00000000000000000000", 00:15:00.831 "model_number": "SPDK bdev Controller", 00:15:00.831 "max_namespaces": 32, 00:15:00.831 "min_cntlid": 1, 00:15:00.831 "max_cntlid": 65519, 00:15:00.831 "ana_reporting": false 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "nvmf_subsystem_add_host", 00:15:00.831 "params": { 00:15:00.831 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.831 "host": "nqn.2016-06.io.spdk:host1", 00:15:00.831 "psk": "key0" 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "nvmf_subsystem_add_ns", 00:15:00.831 "params": { 00:15:00.831 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.831 "namespace": { 00:15:00.831 "nsid": 1, 00:15:00.831 "bdev_name": "malloc0", 00:15:00.831 "nguid": "6818FE7BE4FD4803A7FFB017224C1DB5", 00:15:00.831 "uuid": "6818fe7b-e4fd-4803-a7ff-b017224c1db5", 00:15:00.831 "no_auto_visible": false 00:15:00.831 } 00:15:00.831 } 00:15:00.831 }, 00:15:00.831 { 00:15:00.831 "method": "nvmf_subsystem_add_listener", 00:15:00.831 "params": { 00:15:00.831 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.831 "listen_address": { 00:15:00.831 "trtype": "TCP", 00:15:00.831 "adrfam": "IPv4", 00:15:00.831 "traddr": "10.0.0.3", 00:15:00.831 "trsvcid": "4420" 00:15:00.831 }, 00:15:00.831 "secure_channel": false, 00:15:00.831 "sock_impl": "ssl" 00:15:00.831 } 00:15:00.831 } 00:15:00.831 ] 00:15:00.831 } 00:15:00.831 ] 00:15:00.831 }' 00:15:00.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.831 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=72885 00:15:00.831 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 72885 00:15:00.831 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72885 ']' 00:15:00.831 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.831 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.832 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.832 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:00.832 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.832 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.089 [2024-10-01 13:42:52.731389] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:15:01.089 [2024-10-01 13:42:52.731486] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.089 [2024-10-01 13:42:52.865454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.089 [2024-10-01 13:42:52.924921] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.089 [2024-10-01 13:42:52.924983] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.089 [2024-10-01 13:42:52.924996] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.089 [2024-10-01 13:42:52.925004] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.089 [2024-10-01 13:42:52.925012] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:01.089 [2024-10-01 13:42:52.925104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.346 [2024-10-01 13:42:53.069602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:01.346 [2024-10-01 13:42:53.128659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.346 [2024-10-01 13:42:53.167741] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:01.346 [2024-10-01 13:42:53.167982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:01.911 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.911 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:01.911 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:01.911 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:01.911 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:02.171 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.171 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72918 00:15:02.171 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72918 /var/tmp/bdevperf.sock 00:15:02.171 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72918 ']' 00:15:02.171 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.171 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.171 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
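The relaunched bdevperf instance (pid 72918) started at this point is not configured over RPC at run time; the bperfcfg JSON captured earlier by save_config is replayed through -c /dev/fd/63, supplied by the echo that follows (most likely bash process substitution inside tls.sh). A standalone sketch of the same invocation, assuming the captured config had been written to a plain file first, would be:

  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c bperfcfg.json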
00:15:02.171 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.171 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.171 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:02.171 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:02.171 "subsystems": [ 00:15:02.171 { 00:15:02.171 "subsystem": "keyring", 00:15:02.171 "config": [ 00:15:02.171 { 00:15:02.171 "method": "keyring_file_add_key", 00:15:02.171 "params": { 00:15:02.171 "name": "key0", 00:15:02.171 "path": "/tmp/tmp.ifjnsatssg" 00:15:02.171 } 00:15:02.171 } 00:15:02.171 ] 00:15:02.171 }, 00:15:02.171 { 00:15:02.171 "subsystem": "iobuf", 00:15:02.171 "config": [ 00:15:02.171 { 00:15:02.171 "method": "iobuf_set_options", 00:15:02.171 "params": { 00:15:02.171 "small_pool_count": 8192, 00:15:02.171 "large_pool_count": 1024, 00:15:02.171 "small_bufsize": 8192, 00:15:02.171 "large_bufsize": 135168 00:15:02.171 } 00:15:02.171 } 00:15:02.171 ] 00:15:02.171 }, 00:15:02.171 { 00:15:02.171 "subsystem": "sock", 00:15:02.171 "config": [ 00:15:02.171 { 00:15:02.171 "method": "sock_set_default_impl", 00:15:02.171 "params": { 00:15:02.171 "impl_name": "uring" 00:15:02.171 } 00:15:02.171 }, 00:15:02.171 { 00:15:02.171 "method": "sock_impl_set_options", 00:15:02.171 "params": { 00:15:02.171 "impl_name": "ssl", 00:15:02.171 "recv_buf_size": 4096, 00:15:02.171 "send_buf_size": 4096, 00:15:02.171 "enable_recv_pipe": true, 00:15:02.171 "enable_quickack": false, 00:15:02.171 "enable_placement_id": 0, 00:15:02.171 "enable_zerocopy_send_server": true, 00:15:02.171 "enable_zerocopy_send_client": false, 00:15:02.171 "zerocopy_threshold": 0, 00:15:02.171 "tls_version": 0, 00:15:02.171 "enable_ktls": false 00:15:02.171 } 00:15:02.171 }, 00:15:02.171 { 00:15:02.171 "method": "sock_impl_set_options", 00:15:02.171 "params": { 00:15:02.171 "impl_name": "posix", 00:15:02.171 "recv_buf_size": 2097152, 00:15:02.171 "send_buf_size": 2097152, 00:15:02.171 "enable_recv_pipe": true, 00:15:02.171 "enable_quickack": false, 00:15:02.171 "enable_placement_id": 0, 00:15:02.171 "enable_zerocopy_send_server": true, 00:15:02.171 "enable_zerocopy_send_client": false, 00:15:02.171 "zerocopy_threshold": 0, 00:15:02.171 "tls_version": 0, 00:15:02.171 "enable_ktls": false 00:15:02.171 } 00:15:02.171 }, 00:15:02.171 { 00:15:02.171 "method": "sock_impl_set_options", 00:15:02.171 "params": { 00:15:02.171 "impl_name": "uring", 00:15:02.171 "recv_buf_size": 2097152, 00:15:02.171 "send_buf_size": 2097152, 00:15:02.171 "enable_recv_pipe": true, 00:15:02.171 "enable_quickack": false, 00:15:02.171 "enable_placement_id": 0, 00:15:02.171 "enable_zerocopy_send_server": false, 00:15:02.171 "enable_zerocopy_send_client": false, 00:15:02.171 "zerocopy_threshold": 0, 00:15:02.171 "tls_version": 0, 00:15:02.171 "enable_ktls": false 00:15:02.171 } 00:15:02.171 } 00:15:02.171 ] 00:15:02.171 }, 00:15:02.171 { 00:15:02.171 "subsystem": "vmd", 00:15:02.171 "config": [] 00:15:02.171 }, 00:15:02.171 { 00:15:02.171 "subsystem": "accel", 00:15:02.171 "config": [ 00:15:02.171 { 00:15:02.172 "method": "accel_set_options", 00:15:02.172 "params": { 00:15:02.172 "small_cache_size": 128, 00:15:02.172 "large_cache_size": 16, 00:15:02.172 "task_count": 2048, 00:15:02.172 "sequence_count": 2048, 00:15:02.172 "buf_count": 2048 
00:15:02.172 } 00:15:02.172 } 00:15:02.172 ] 00:15:02.172 }, 00:15:02.172 { 00:15:02.172 "subsystem": "bdev", 00:15:02.172 "config": [ 00:15:02.172 { 00:15:02.172 "method": "bdev_set_options", 00:15:02.172 "params": { 00:15:02.172 "bdev_io_pool_size": 65535, 00:15:02.172 "bdev_io_cache_size": 256, 00:15:02.172 "bdev_auto_examine": true, 00:15:02.172 "iobuf_small_cache_size": 128, 00:15:02.172 "iobuf_large_cache_size": 16 00:15:02.172 } 00:15:02.172 }, 00:15:02.172 { 00:15:02.172 "method": "bdev_raid_set_options", 00:15:02.172 "params": { 00:15:02.172 "process_window_size_kb": 1024, 00:15:02.172 "process_max_bandwidth_mb_sec": 0 00:15:02.172 } 00:15:02.172 }, 00:15:02.172 { 00:15:02.172 "method": "bdev_iscsi_set_options", 00:15:02.172 "params": { 00:15:02.172 "timeout_sec": 30 00:15:02.172 } 00:15:02.172 }, 00:15:02.172 { 00:15:02.172 "method": "bdev_nvme_set_options", 00:15:02.172 "params": { 00:15:02.172 "action_on_timeout": "none", 00:15:02.172 "timeout_us": 0, 00:15:02.172 "timeout_admin_us": 0, 00:15:02.172 "keep_alive_timeout_ms": 10000, 00:15:02.172 "arbitration_burst": 0, 00:15:02.172 "low_priority_weight": 0, 00:15:02.172 "medium_priority_weight": 0, 00:15:02.172 "high_priority_weight": 0, 00:15:02.172 "nvme_adminq_poll_period_us": 10000, 00:15:02.172 "nvme_ioq_poll_period_us": 0, 00:15:02.172 "io_queue_requests": 512, 00:15:02.172 "delay_cmd_submit": true, 00:15:02.172 "transport_retry_count": 4, 00:15:02.172 "bdev_retry_count": 3, 00:15:02.172 "transport_ack_timeout": 0, 00:15:02.172 "ctrlr_loss_timeout_sec": 0, 00:15:02.172 "reconnect_delay_sec": 0, 00:15:02.172 "fast_io_fail_timeout_sec": 0, 00:15:02.172 "disable_auto_failback": false, 00:15:02.172 "generate_uuids": false, 00:15:02.172 "transport_tos": 0, 00:15:02.172 "nvme_error_stat": false, 00:15:02.172 "rdma_srq_size": 0, 00:15:02.172 "io_path_stat": false, 00:15:02.172 "allow_accel_sequence": false, 00:15:02.172 "rdma_max_cq_size": 0, 00:15:02.172 "rdma_cm_event_timeout_ms": 0, 00:15:02.172 "dhchap_digests": [ 00:15:02.172 "sha256", 00:15:02.172 "sha384", 00:15:02.172 "sha512" 00:15:02.172 ], 00:15:02.172 "dhchap_dhgroups": [ 00:15:02.172 "null", 00:15:02.172 "ffdhe2048", 00:15:02.172 "ffdhe3072", 00:15:02.172 "ffdhe4096", 00:15:02.172 "ffdhe6144", 00:15:02.172 "ffdhe8192" 00:15:02.172 ] 00:15:02.172 } 00:15:02.172 }, 00:15:02.172 { 00:15:02.172 "method": "bdev_nvme_attach_controller", 00:15:02.172 "params": { 00:15:02.172 "name": "nvme0", 00:15:02.172 "trtype": "TCP", 00:15:02.172 "adrfam": "IPv4", 00:15:02.172 "traddr": "10.0.0.3", 00:15:02.172 "trsvcid": "4420", 00:15:02.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.172 "prchk_reftag": false, 00:15:02.172 "prchk_guard": false, 00:15:02.172 "ctrlr_loss_timeout_sec": 0, 00:15:02.172 "reconnect_delay_sec": 0, 00:15:02.172 "fast_io_fail_timeout_sec": 0, 00:15:02.172 "psk": "key0", 00:15:02.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.172 "hdgst": false, 00:15:02.172 "ddgst": false, 00:15:02.172 "multipath": "multipath" 00:15:02.172 } 00:15:02.172 }, 00:15:02.172 { 00:15:02.172 "method": "bdev_nvme_set_hotplug", 00:15:02.172 "params": { 00:15:02.172 "period_us": 100000, 00:15:02.172 "enable": false 00:15:02.172 } 00:15:02.172 }, 00:15:02.172 { 00:15:02.172 "method": "bdev_enable_histogram", 00:15:02.172 "params": { 00:15:02.172 "name": "nvme0n1", 00:15:02.172 "enable": true 00:15:02.172 } 00:15:02.172 }, 00:15:02.172 { 00:15:02.172 "method": "bdev_wait_for_examine" 00:15:02.172 } 00:15:02.172 ] 00:15:02.172 }, 00:15:02.172 { 00:15:02.172 "subsystem": "nbd", 
00:15:02.172 "config": [] 00:15:02.172 } 00:15:02.172 ] 00:15:02.172 }' 00:15:02.172 [2024-10-01 13:42:53.850562] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:15:02.172 [2024-10-01 13:42:53.851896] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72918 ] 00:15:02.172 [2024-10-01 13:42:53.990924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.438 [2024-10-01 13:42:54.048977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.438 [2024-10-01 13:42:54.160173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:02.438 [2024-10-01 13:42:54.192814] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:03.004 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.004 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:03.004 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:03.004 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:03.262 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.262 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:03.520 Running I/O for 1 seconds... 
00:15:04.458 3814.00 IOPS, 14.90 MiB/s 00:15:04.458 Latency(us) 00:15:04.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.458 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:04.458 Verification LBA range: start 0x0 length 0x2000 00:15:04.458 nvme0n1 : 1.02 3873.37 15.13 0.00 0.00 32730.57 6076.97 28597.53 00:15:04.458 =================================================================================================================== 00:15:04.458 Total : 3873.37 15.13 0.00 0.00 32730.57 6076.97 28597.53 00:15:04.458 { 00:15:04.458 "results": [ 00:15:04.458 { 00:15:04.458 "job": "nvme0n1", 00:15:04.458 "core_mask": "0x2", 00:15:04.458 "workload": "verify", 00:15:04.458 "status": "finished", 00:15:04.458 "verify_range": { 00:15:04.458 "start": 0, 00:15:04.458 "length": 8192 00:15:04.458 }, 00:15:04.458 "queue_depth": 128, 00:15:04.458 "io_size": 4096, 00:15:04.458 "runtime": 1.017977, 00:15:04.458 "iops": 3873.3684552794416, 00:15:04.458 "mibps": 15.130345528435319, 00:15:04.458 "io_failed": 0, 00:15:04.458 "io_timeout": 0, 00:15:04.458 "avg_latency_us": 32730.57287436885, 00:15:04.458 "min_latency_us": 6076.9745454545455, 00:15:04.458 "max_latency_us": 28597.52727272727 00:15:04.458 } 00:15:04.458 ], 00:15:04.458 "core_count": 1 00:15:04.458 } 00:15:04.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:04.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:04.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:04.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:15:04.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:15:04.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:04.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:04.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:04.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:04.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:04.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:04.458 nvmf_trace.0 00:15:04.720 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:15:04.720 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72918 00:15:04.720 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72918 ']' 00:15:04.720 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72918 00:15:04.720 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:04.720 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:04.720 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72918 00:15:04.720 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:15:04.720 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:04.720 killing process with pid 72918 00:15:04.720 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72918' 00:15:04.720 Received shutdown signal, test time was about 1.000000 seconds 00:15:04.720 00:15:04.720 Latency(us) 00:15:04.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.720 =================================================================================================================== 00:15:04.720 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:04.720 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72918 00:15:04.720 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72918 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:04.978 rmmod nvme_tcp 00:15:04.978 rmmod nvme_fabrics 00:15:04.978 rmmod nvme_keyring 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 72885 ']' 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 72885 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72885 ']' 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72885 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72885 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:04.978 killing process with pid 72885 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72885' 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72885 00:15:04.978 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72885 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:05.236 13:42:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:05.236 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:05.236 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:05.236 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:05.236 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:05.236 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TSY4SWMLTK /tmp/tmp.CS4Y3s6kQV /tmp/tmp.ifjnsatssg 00:15:05.494 00:15:05.494 real 1m27.291s 00:15:05.494 user 2m25.404s 00:15:05.494 sys 0m26.419s 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 ************************************ 00:15:05.494 END TEST nvmf_tls 00:15:05.494 ************************************ 00:15:05.494 
13:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 ************************************ 00:15:05.494 START TEST nvmf_fips 00:15:05.494 ************************************ 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:05.494 * Looking for test storage... 00:15:05.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:15:05.494 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:05.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.767 --rc genhtml_branch_coverage=1 00:15:05.767 --rc genhtml_function_coverage=1 00:15:05.767 --rc genhtml_legend=1 00:15:05.767 --rc geninfo_all_blocks=1 00:15:05.767 --rc geninfo_unexecuted_blocks=1 00:15:05.767 00:15:05.767 ' 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:05.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.767 --rc genhtml_branch_coverage=1 00:15:05.767 --rc genhtml_function_coverage=1 00:15:05.767 --rc genhtml_legend=1 00:15:05.767 --rc geninfo_all_blocks=1 00:15:05.767 --rc geninfo_unexecuted_blocks=1 00:15:05.767 00:15:05.767 ' 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:05.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.767 --rc genhtml_branch_coverage=1 00:15:05.767 --rc genhtml_function_coverage=1 00:15:05.767 --rc genhtml_legend=1 00:15:05.767 --rc geninfo_all_blocks=1 00:15:05.767 --rc geninfo_unexecuted_blocks=1 00:15:05.767 00:15:05.767 ' 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:05.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.767 --rc genhtml_branch_coverage=1 00:15:05.767 --rc genhtml_function_coverage=1 00:15:05.767 --rc genhtml_legend=1 00:15:05.767 --rc geninfo_all_blocks=1 00:15:05.767 --rc geninfo_unexecuted_blocks=1 00:15:05.767 00:15:05.767 ' 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
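A few lines back the harness probes the installed lcov with scripts/common.sh's dotted-version comparison: each version string is split into numeric fields and the fields are compared one by one. A minimal standalone sketch of that idea follows; the helper name and structure are illustrative, not the exact SPDK implementation.

# version_lt A B -> succeeds (exit 0) when A sorts before B, comparing dot-separated numeric fields
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0, so "1.15" vs "2" compares 1 vs 2 first
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                              # equal versions are not "less than"
}

# mirrors the "lt 1.15 2" probe in the log: pick older-lcov options when the check succeeds
if version_lt 1.15 2; then
    echo "lcov is older than 2.x"
fi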
00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.767 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:05.768 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:15:05.768 Error setting digest 00:15:05.768 40F2F52C377F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:05.768 40F2F52C377F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:05.768 
13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:05.768 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:05.769 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:05.769 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:05.769 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:05.769 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.769 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:05.769 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:05.769 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:05.769 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:05.769 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:05.769 Cannot find device "nvmf_init_br" 00:15:05.769 13:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:05.769 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:05.769 Cannot find device "nvmf_init_br2" 00:15:05.769 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:05.769 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:06.048 Cannot find device "nvmf_tgt_br" 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:06.048 Cannot find device "nvmf_tgt_br2" 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:06.048 Cannot find device "nvmf_init_br" 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:06.048 Cannot find device "nvmf_init_br2" 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:06.048 Cannot find device "nvmf_tgt_br" 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:06.048 Cannot find device "nvmf_tgt_br2" 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:06.048 Cannot find device "nvmf_br" 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:06.048 Cannot find device "nvmf_init_if" 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:06.048 Cannot find device "nvmf_init_if2" 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:06.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:06.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:06.048 13:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:06.048 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:06.306 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:06.306 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:15:06.306 00:15:06.306 --- 10.0.0.3 ping statistics --- 00:15:06.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.306 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:06.306 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:06.306 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:15:06.306 00:15:06.306 --- 10.0.0.4 ping statistics --- 00:15:06.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.306 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:06.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:06.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:06.306 00:15:06.306 --- 10.0.0.1 ping statistics --- 00:15:06.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.306 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:06.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:06.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:15:06.306 00:15:06.306 --- 10.0.0.2 ping statistics --- 00:15:06.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.306 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=73229 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 73229 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73229 ']' 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.306 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:06.306 [2024-10-01 13:42:58.092496] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
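The nvmf_veth_init sequence above assembles a disposable test network: the target runs inside the nvmf_tgt_ns_spdk namespace, veth pairs connect it to the host, the peer ends hang off an nvmf_br bridge, and the iptables ACCEPT rules carry an SPDK_NVMF comment so the later iptr cleanup can find and strip them. A condensed sketch of one initiator/target pair under those assumptions follows; interface names, addresses, and the port are taken from the log, everything else is illustrative.

#!/usr/bin/env bash
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# one veth pair for the initiator side, one for the target side
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"            # target end lives inside the namespace

# addressing as in the log: initiator 10.0.0.1/24, target 10.0.0.3/24
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# bridge the peer ends so the two /24 neighbours can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# admit NVMe/TCP traffic, tagged so cleanup can grep the rule back out later
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# sanity checks in both directions, as the log does
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1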
00:15:06.307 [2024-10-01 13:42:58.092602] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.565 [2024-10-01 13:42:58.229836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.565 [2024-10-01 13:42:58.290511] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.565 [2024-10-01 13:42:58.290571] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.565 [2024-10-01 13:42:58.290583] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.565 [2024-10-01 13:42:58.290592] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.565 [2024-10-01 13:42:58.290599] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.565 [2024-10-01 13:42:58.290627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.565 [2024-10-01 13:42:58.321666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:06.565 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.565 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:06.565 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:06.565 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:06.565 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:06.565 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.565 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:06.565 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:06.565 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:06.565 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.j2P 00:15:06.565 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:06.565 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.j2P 00:15:06.823 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.j2P 00:15:06.823 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.j2P 00:15:06.823 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:07.081 [2024-10-01 13:42:58.721314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.081 [2024-10-01 13:42:58.737240] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:07.081 [2024-10-01 13:42:58.737479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:07.082 malloc0 00:15:07.082 13:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:07.082 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73263 00:15:07.082 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:07.082 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73263 /var/tmp/bdevperf.sock 00:15:07.082 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73263 ']' 00:15:07.082 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.082 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:07.082 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:07.082 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.082 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:07.082 [2024-10-01 13:42:58.899239] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:15:07.082 [2024-10-01 13:42:58.899384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73263 ] 00:15:07.341 [2024-10-01 13:42:59.045130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.341 [2024-10-01 13:42:59.104593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.341 [2024-10-01 13:42:59.134830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.276 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.277 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:08.277 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.j2P 00:15:08.535 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:08.794 [2024-10-01 13:43:00.594468] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:09.053 TLSTESTn1 00:15:09.053 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:09.053 Running I/O for 10 seconds... 
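Before the I/O samples that follow, the initiator side is driven entirely over bdevperf's RPC socket: the PSK file is registered as keyring entry key0 and then referenced by name when the NVMe-oF controller is attached over TLS. A compressed sketch of that sequence, reusing the paths, flags, and names visible in the log (the wait-for-socket handshaking between steps is omitted and assumed to succeed):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
PSK=/tmp/spdk-psk.j2P        # interchange-format TLS key written and chmod 0600 earlier in the log

# launch bdevperf idle (-z) on its own RPC socket: verify workload, QD 128, 4 KiB I/O, 10 s
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &

# register the PSK under the name key0, then attach the TLS-protected controller with it
"$RPC" -s "$SOCK" keyring_file_add_key key0 "$PSK"
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# kick off the timed verify run against the attached bdev (its results are what follow)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests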
00:15:19.370 3354.00 IOPS, 13.10 MiB/s 3537.00 IOPS, 13.82 MiB/s 3621.00 IOPS, 14.14 MiB/s 3687.00 IOPS, 14.40 MiB/s 3695.00 IOPS, 14.43 MiB/s 3657.67 IOPS, 14.29 MiB/s 3682.29 IOPS, 14.38 MiB/s 3665.12 IOPS, 14.32 MiB/s 3673.56 IOPS, 14.35 MiB/s 3677.30 IOPS, 14.36 MiB/s 00:15:19.370 Latency(us) 00:15:19.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.370 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:19.370 Verification LBA range: start 0x0 length 0x2000 00:15:19.370 TLSTESTn1 : 10.02 3683.46 14.39 0.00 0.00 34686.89 5749.29 35031.97 00:15:19.370 =================================================================================================================== 00:15:19.370 Total : 3683.46 14.39 0.00 0.00 34686.89 5749.29 35031.97 00:15:19.370 { 00:15:19.370 "results": [ 00:15:19.370 { 00:15:19.370 "job": "TLSTESTn1", 00:15:19.370 "core_mask": "0x4", 00:15:19.370 "workload": "verify", 00:15:19.370 "status": "finished", 00:15:19.370 "verify_range": { 00:15:19.370 "start": 0, 00:15:19.370 "length": 8192 00:15:19.370 }, 00:15:19.370 "queue_depth": 128, 00:15:19.370 "io_size": 4096, 00:15:19.370 "runtime": 10.016941, 00:15:19.370 "iops": 3683.459850666985, 00:15:19.370 "mibps": 14.38851504166791, 00:15:19.370 "io_failed": 0, 00:15:19.370 "io_timeout": 0, 00:15:19.370 "avg_latency_us": 34686.88507634274, 00:15:19.370 "min_latency_us": 5749.294545454545, 00:15:19.370 "max_latency_us": 35031.97090909091 00:15:19.370 } 00:15:19.370 ], 00:15:19.370 "core_count": 1 00:15:19.370 } 00:15:19.370 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:19.370 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:19.370 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:15:19.370 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:15:19.370 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:19.370 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:19.370 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:19.370 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:19.370 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:19.370 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:19.370 nvmf_trace.0 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73263 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73263 ']' 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73263 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 73263 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:19.370 killing process with pid 73263 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73263' 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73263 00:15:19.370 Received shutdown signal, test time was about 10.000000 seconds 00:15:19.370 00:15:19.370 Latency(us) 00:15:19.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.370 =================================================================================================================== 00:15:19.370 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73263 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:19.370 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:19.630 rmmod nvme_tcp 00:15:19.630 rmmod nvme_fabrics 00:15:19.630 rmmod nvme_keyring 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 73229 ']' 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 73229 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73229 ']' 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73229 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73229 00:15:19.630 killing process with pid 73229 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73229' 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73229 00:15:19.630 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 
73229 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:19.889 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.j2P 00:15:20.148 00:15:20.148 real 0m14.560s 00:15:20.148 user 0m20.892s 00:15:20.148 sys 0m5.719s 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:20.148 
************************************ 00:15:20.148 END TEST nvmf_fips 00:15:20.148 ************************************ 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:20.148 ************************************ 00:15:20.148 START TEST nvmf_control_msg_list 00:15:20.148 ************************************ 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:20.148 * Looking for test storage... 00:15:20.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:20.148 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:20.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.149 --rc genhtml_branch_coverage=1 00:15:20.149 --rc genhtml_function_coverage=1 00:15:20.149 --rc genhtml_legend=1 00:15:20.149 --rc geninfo_all_blocks=1 00:15:20.149 --rc geninfo_unexecuted_blocks=1 00:15:20.149 00:15:20.149 ' 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:20.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.149 --rc genhtml_branch_coverage=1 00:15:20.149 --rc genhtml_function_coverage=1 00:15:20.149 --rc genhtml_legend=1 00:15:20.149 --rc geninfo_all_blocks=1 00:15:20.149 --rc geninfo_unexecuted_blocks=1 00:15:20.149 00:15:20.149 ' 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:20.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.149 --rc genhtml_branch_coverage=1 00:15:20.149 --rc genhtml_function_coverage=1 00:15:20.149 --rc genhtml_legend=1 00:15:20.149 --rc geninfo_all_blocks=1 00:15:20.149 --rc geninfo_unexecuted_blocks=1 00:15:20.149 00:15:20.149 ' 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:20.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.149 --rc genhtml_branch_coverage=1 00:15:20.149 --rc genhtml_function_coverage=1 00:15:20.149 --rc genhtml_legend=1 00:15:20.149 --rc geninfo_all_blocks=1 00:15:20.149 --rc geninfo_unexecuted_blocks=1 00:15:20.149 00:15:20.149 ' 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:20.149 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
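The "[: : integer expression expected" message near the top of this block comes from an empty value reaching a numeric test in build_nvmf_app_args (nvmf/common.sh line 33): the test returns a non-zero status, the script takes the else path, and the run continues. A small illustration; the variable name below is a stand-in, not the one common.sh actually expands:

    val=''
    [ "$val" -eq 1 ]        # prints "[: : integer expression expected", exit status 2
    [ "${val:-0}" -eq 1 ]   # one possible guard: default the empty value to 0; evaluates cleanly to false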
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:20.149 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:20.150 Cannot find device "nvmf_init_br" 00:15:20.150 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:20.150 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:20.150 Cannot find device "nvmf_init_br2" 00:15:20.150 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:20.150 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:20.408 Cannot find device "nvmf_tgt_br" 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:20.408 Cannot find device "nvmf_tgt_br2" 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:20.408 Cannot find device "nvmf_init_br" 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:20.408 Cannot find device "nvmf_init_br2" 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:20.408 Cannot find device "nvmf_tgt_br" 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:20.408 Cannot find device "nvmf_tgt_br2" 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:20.408 Cannot find device "nvmf_br" 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:20.408 Cannot find 
device "nvmf_init_if" 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:20.408 Cannot find device "nvmf_init_if2" 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:20.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:20.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:20.408 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:20.409 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:20.409 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:20.409 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:20.409 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:20.409 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:20.409 13:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:20.409 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:20.668 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:20.668 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:15:20.668 00:15:20.668 --- 10.0.0.3 ping statistics --- 00:15:20.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.668 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:20.668 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:20.668 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:15:20.668 00:15:20.668 --- 10.0.0.4 ping statistics --- 00:15:20.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.668 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:20.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:20.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:20.668 00:15:20.668 --- 10.0.0.1 ping statistics --- 00:15:20.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.668 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:20.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:15:20.668 00:15:20.668 --- 10.0.0.2 ping statistics --- 00:15:20.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.668 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=73651 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 73651 00:15:20.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 73651 ']' 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
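The four pings above confirm the topology nvmf_veth_init just assembled: two initiator-side interfaces (10.0.0.1, 10.0.0.2) in the default namespace and two target-side interfaces (10.0.0.3, 10.0.0.4) inside nvmf_tgt_ns_spdk, all joined through the nvmf_br bridge. Condensed from the ip/iptables commands earlier in this block, with the per-interface up/master steps collapsed into loops:

    ip netns add nvmf_tgt_ns_spdk
    # four veth pairs: the *_if end carries traffic, the *_br end joins the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # open TCP/4420 on the initiator interfaces and allow bridge forwarding,
    # tagged with an SPDK_NVMF comment so teardown can strip exactly these rules
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'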
00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.668 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:20.668 [2024-10-01 13:43:12.473969] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:15:20.668 [2024-10-01 13:43:12.474852] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.926 [2024-10-01 13:43:12.616477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.926 [2024-10-01 13:43:12.688190] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.926 [2024-10-01 13:43:12.688256] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.926 [2024-10-01 13:43:12.688269] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.926 [2024-10-01 13:43:12.688280] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.926 [2024-10-01 13:43:12.688289] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.926 [2024-10-01 13:43:12.688326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.926 [2024-10-01 13:43:12.721397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:21.921 [2024-10-01 13:43:13.530713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.921 13:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:21.921 Malloc0 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.921 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:21.921 [2024-10-01 13:43:13.577301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:21.922 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.922 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73683 00:15:21.922 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:21.922 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73684 00:15:21.922 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:21.922 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73685 00:15:21.922 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 
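With the listener up on 10.0.0.3:4420, the test drives the target from three single-queue perf initiators on separate cores. A condensed sketch of the RPC calls and perf launches above, assuming scripts/rpc.py is the client behind the log's rpc_cmd wrapper (the "-t tcp -o" portion is the NVMF_TRANSPORT_OPTS value seen earlier):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512      # 32 MB malloc bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # three concurrent initiators: one core each (0x2, 0x4, 0x8), queue depth 1,
    # 4 KiB random reads for 1 second against the same subsystem
    for mask in 0x2 0x4 0x8; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
    done
    wait

The small in-capsule data size and --control-msg-num 1 presumably force the concurrent connections to contend for the transport's control-message resources, which is the behavior this test is named after; the per-core latency tables that follow show all three initiators still completing I/O.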
00:15:21.922 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73683 00:15:21.922 [2024-10-01 13:43:13.745641] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:21.922 [2024-10-01 13:43:13.755709] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:21.922 [2024-10-01 13:43:13.765861] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:23.297 Initializing NVMe Controllers 00:15:23.297 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:23.297 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:23.297 Initialization complete. Launching workers. 00:15:23.297 ======================================================== 00:15:23.297 Latency(us) 00:15:23.297 Device Information : IOPS MiB/s Average min max 00:15:23.297 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3285.00 12.83 303.99 141.76 687.48 00:15:23.297 ======================================================== 00:15:23.298 Total : 3285.00 12.83 303.99 141.76 687.48 00:15:23.298 00:15:23.298 Initializing NVMe Controllers 00:15:23.298 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:23.298 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:23.298 Initialization complete. Launching workers. 00:15:23.298 ======================================================== 00:15:23.298 Latency(us) 00:15:23.298 Device Information : IOPS MiB/s Average min max 00:15:23.298 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3366.00 13.15 296.67 155.23 612.23 00:15:23.298 ======================================================== 00:15:23.298 Total : 3366.00 13.15 296.67 155.23 612.23 00:15:23.298 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73684 00:15:23.298 Initializing NVMe Controllers 00:15:23.298 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:23.298 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:23.298 Initialization complete. Launching workers. 
00:15:23.298 ======================================================== 00:15:23.298 Latency(us) 00:15:23.298 Device Information : IOPS MiB/s Average min max 00:15:23.298 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3387.00 13.23 294.81 132.45 821.19 00:15:23.298 ======================================================== 00:15:23.298 Total : 3387.00 13.23 294.81 132.45 821.19 00:15:23.298 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73685 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:23.298 rmmod nvme_tcp 00:15:23.298 rmmod nvme_fabrics 00:15:23.298 rmmod nvme_keyring 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 73651 ']' 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 73651 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 73651 ']' 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 73651 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73651 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:23.298 killing process with pid 73651 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73651' 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 73651 00:15:23.298 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 73651 00:15:23.298 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 
-- # '[' '' == iso ']' 00:15:23.298 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:23.298 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:23.298 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:23.298 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:15:23.298 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:23.298 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:15:23.298 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:23.298 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:23.298 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:23.298 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:23.298 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:23.558 00:15:23.558 real 0m3.559s 00:15:23.558 user 0m5.675s 00:15:23.558 sys 0m1.238s 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:23.558 ************************************ 00:15:23.558 END TEST nvmf_control_msg_list 00:15:23.558 ************************************ 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.558 ************************************ 00:15:23.558 START TEST nvmf_wait_for_buf 00:15:23.558 ************************************ 00:15:23.558 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:23.818 * Looking for test storage... 00:15:23.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:23.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.818 --rc genhtml_branch_coverage=1 00:15:23.818 --rc genhtml_function_coverage=1 00:15:23.818 --rc genhtml_legend=1 00:15:23.818 --rc geninfo_all_blocks=1 00:15:23.818 --rc geninfo_unexecuted_blocks=1 00:15:23.818 00:15:23.818 ' 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:23.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.818 --rc genhtml_branch_coverage=1 00:15:23.818 --rc genhtml_function_coverage=1 00:15:23.818 --rc genhtml_legend=1 00:15:23.818 --rc geninfo_all_blocks=1 00:15:23.818 --rc geninfo_unexecuted_blocks=1 00:15:23.818 00:15:23.818 ' 00:15:23.818 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:23.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.819 --rc genhtml_branch_coverage=1 00:15:23.819 --rc genhtml_function_coverage=1 00:15:23.819 --rc genhtml_legend=1 00:15:23.819 --rc geninfo_all_blocks=1 00:15:23.819 --rc geninfo_unexecuted_blocks=1 00:15:23.819 00:15:23.819 ' 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:23.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.819 --rc genhtml_branch_coverage=1 00:15:23.819 --rc genhtml_function_coverage=1 00:15:23.819 --rc genhtml_legend=1 00:15:23.819 --rc geninfo_all_blocks=1 00:15:23.819 --rc geninfo_unexecuted_blocks=1 00:15:23.819 00:15:23.819 ' 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:23.819 13:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:23.819 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
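Before any network setup, nvmf/common.sh fixes the host identity that later nvme connect calls will present; the NQN comes straight from nvme-cli and the host ID is its UUID suffix. Roughly (a sketch; the exact parameter expansion used by the script may differ):

NVME_HOSTNQN=$(nvme gen-hostnqn)             # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}          # keep only the UUID portion for --hostid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")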
00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:23.819 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:23.820 Cannot find device "nvmf_init_br" 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:23.820 Cannot find device "nvmf_init_br2" 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:23.820 Cannot find device "nvmf_tgt_br" 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:23.820 Cannot find device "nvmf_tgt_br2" 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:23.820 Cannot find device "nvmf_init_br" 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:23.820 Cannot find device "nvmf_init_br2" 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:23.820 Cannot find device "nvmf_tgt_br" 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:23.820 Cannot find device "nvmf_tgt_br2" 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:23.820 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:24.080 Cannot find device "nvmf_br" 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:24.080 Cannot find device "nvmf_init_if" 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:24.080 Cannot find device "nvmf_init_if2" 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:24.080 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:24.080 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:24.080 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:24.339 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:24.339 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:24.339 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:24.339 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:24.339 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:24.339 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:24.339 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:15:24.339 00:15:24.339 --- 10.0.0.3 ping statistics --- 00:15:24.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.339 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:15:24.339 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:24.339 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:24.339 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:15:24.339 00:15:24.339 --- 10.0.0.4 ping statistics --- 00:15:24.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.339 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:24.339 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:24.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:15:24.339 00:15:24.339 --- 10.0.0.1 ping statistics --- 00:15:24.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.339 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:24.339 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:24.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
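The commands above are nvmf_veth_init building the virtual test network, and the pings that follow (the 10.0.0.2 reply continues below) verify it. Condensed to a single initiator/target pair, with the interface names and addresses from the trace, the topology is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                              # bridge joins the two *_br peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:...'                # tagged so teardown can strip it later
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
ping -c 1 10.0.0.3                                           # host side -> target address in the netns

The second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is built the same way.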
00:15:24.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:15:24.339 00:15:24.339 --- 10.0.0.2 ping statistics --- 00:15:24.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.339 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:24.339 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.339 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:15:24.339 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:24.340 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.340 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:24.340 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:24.340 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.340 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:24.340 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:24.340 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:24.340 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:24.340 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:24.340 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:24.340 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=73922 00:15:24.340 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:24.340 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 73922 00:15:24.340 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 73922 ']' 00:15:24.340 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.340 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.340 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.340 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.340 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:24.340 [2024-10-01 13:43:16.072603] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
00:15:24.340 [2024-10-01 13:43:16.072734] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.599 [2024-10-01 13:43:16.217707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.599 [2024-10-01 13:43:16.293200] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.599 [2024-10-01 13:43:16.293263] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.599 [2024-10-01 13:43:16.293275] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.599 [2024-10-01 13:43:16.293283] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.599 [2024-10-01 13:43:16.293290] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.599 [2024-10-01 13:43:16.293320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.583 13:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:25.583 [2024-10-01 13:43:17.151580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:25.583 Malloc0 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:25.583 [2024-10-01 13:43:17.193217] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:25.583 [2024-10-01 13:43:17.217310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.583 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:25.583 [2024-10-01 13:43:17.402673] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:26.958 Initializing NVMe Controllers 00:15:26.958 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:26.958 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:26.958 Initialization complete. Launching workers. 00:15:26.958 ======================================================== 00:15:26.958 Latency(us) 00:15:26.958 Device Information : IOPS MiB/s Average min max 00:15:26.958 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.98 62.50 8000.37 7909.93 8238.81 00:15:26.958 ======================================================== 00:15:26.958 Total : 499.98 62.50 8000.37 7909.93 8238.81 00:15:26.958 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:26.958 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:26.958 rmmod nvme_tcp 00:15:26.958 rmmod nvme_fabrics 00:15:27.216 rmmod nvme_keyring 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 73922 ']' 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 73922 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 73922 ']' 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # 
kill -0 73922 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73922 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:27.216 killing process with pid 73922 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73922' 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 73922 00:15:27.216 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 73922 00:15:27.216 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:27.216 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:27.216 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:27.216 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:27.216 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:15:27.216 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:27.216 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:15:27.216 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:27.216 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:27.217 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:27.474 00:15:27.474 real 0m3.929s 00:15:27.474 user 0m3.494s 00:15:27.474 sys 0m0.780s 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:27.474 ************************************ 00:15:27.474 END TEST nvmf_wait_for_buf 00:15:27.474 ************************************ 00:15:27.474 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:27.731 13:43:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:15:27.731 13:43:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:15:27.731 13:43:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:27.731 00:15:27.731 real 6m15.679s 00:15:27.731 user 13m5.650s 00:15:27.731 sys 1m13.796s 00:15:27.731 13:43:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:27.731 13:43:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.731 ************************************ 00:15:27.731 END TEST nvmf_target_extra 00:15:27.731 ************************************ 00:15:27.731 13:43:19 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:27.731 13:43:19 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:27.731 13:43:19 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:27.731 13:43:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:27.731 ************************************ 00:15:27.731 START TEST nvmf_host 00:15:27.731 ************************************ 00:15:27.731 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:27.731 * Looking for test storage... 
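That closes nvmf_wait_for_buf. Stripped of the tracing, the test that just ran reduces to the sequence below (a condensed sketch: rpc.py stands in for the rpc_cmd wrapper, the polling loop approximates waitforlisten, and the flags are copied from the trace):

ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done    # wait for the RPC socket
./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately tiny iobuf pool
./scripts/rpc.py framework_start_init
./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'              # 128 KiB reads at QD 4
retry=$(./scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[[ $retry -eq 0 ]] && exit 1    # pass only if the target had to retry buffer allocation (4750 times here)
kill "$nvmfpid"; wait "$nvmfpid"

nvmftestfini, traced just before this, then reverses the setup: it unloads nvme-tcp/nvme-fabrics, restores iptables from a dump filtered through grep -v SPDK_NVMF, and deletes the bridge, the veth pairs and the namespace.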
00:15:27.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:27.731 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:27.731 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:15:27.731 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:27.731 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:27.731 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.731 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:27.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.732 --rc genhtml_branch_coverage=1 00:15:27.732 --rc genhtml_function_coverage=1 00:15:27.732 --rc genhtml_legend=1 00:15:27.732 --rc geninfo_all_blocks=1 00:15:27.732 --rc geninfo_unexecuted_blocks=1 00:15:27.732 00:15:27.732 ' 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:27.732 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:27.732 --rc genhtml_branch_coverage=1 00:15:27.732 --rc genhtml_function_coverage=1 00:15:27.732 --rc genhtml_legend=1 00:15:27.732 --rc geninfo_all_blocks=1 00:15:27.732 --rc geninfo_unexecuted_blocks=1 00:15:27.732 00:15:27.732 ' 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:27.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.732 --rc genhtml_branch_coverage=1 00:15:27.732 --rc genhtml_function_coverage=1 00:15:27.732 --rc genhtml_legend=1 00:15:27.732 --rc geninfo_all_blocks=1 00:15:27.732 --rc geninfo_unexecuted_blocks=1 00:15:27.732 00:15:27.732 ' 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:27.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.732 --rc genhtml_branch_coverage=1 00:15:27.732 --rc genhtml_function_coverage=1 00:15:27.732 --rc genhtml_legend=1 00:15:27.732 --rc geninfo_all_blocks=1 00:15:27.732 --rc geninfo_unexecuted_blocks=1 00:15:27.732 00:15:27.732 ' 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.732 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.732 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:27.989 
13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.989 ************************************ 00:15:27.989 START TEST nvmf_identify 00:15:27.989 ************************************ 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:27.989 * Looking for test storage... 00:15:27.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.989 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:27.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.990 --rc genhtml_branch_coverage=1 00:15:27.990 --rc genhtml_function_coverage=1 00:15:27.990 --rc genhtml_legend=1 00:15:27.990 --rc geninfo_all_blocks=1 00:15:27.990 --rc geninfo_unexecuted_blocks=1 00:15:27.990 00:15:27.990 ' 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:27.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.990 --rc genhtml_branch_coverage=1 00:15:27.990 --rc genhtml_function_coverage=1 00:15:27.990 --rc genhtml_legend=1 00:15:27.990 --rc geninfo_all_blocks=1 00:15:27.990 --rc geninfo_unexecuted_blocks=1 00:15:27.990 00:15:27.990 ' 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:27.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.990 --rc genhtml_branch_coverage=1 00:15:27.990 --rc genhtml_function_coverage=1 00:15:27.990 --rc genhtml_legend=1 00:15:27.990 --rc geninfo_all_blocks=1 00:15:27.990 --rc geninfo_unexecuted_blocks=1 00:15:27.990 00:15:27.990 ' 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:27.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.990 --rc genhtml_branch_coverage=1 00:15:27.990 --rc genhtml_function_coverage=1 00:15:27.990 --rc genhtml_legend=1 00:15:27.990 --rc geninfo_all_blocks=1 00:15:27.990 --rc geninfo_unexecuted_blocks=1 00:15:27.990 00:15:27.990 ' 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.990 
13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.990 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.990 13:43:19 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:27.990 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:27.991 Cannot find device "nvmf_init_br" 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:27.991 Cannot find device "nvmf_init_br2" 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:27.991 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:28.328 Cannot find device "nvmf_tgt_br" 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:15:28.328 Cannot find device "nvmf_tgt_br2" 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:28.328 Cannot find device "nvmf_init_br" 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:28.328 Cannot find device "nvmf_init_br2" 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:28.328 Cannot find device "nvmf_tgt_br" 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:28.328 Cannot find device "nvmf_tgt_br2" 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:28.328 Cannot find device "nvmf_br" 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:28.328 Cannot find device "nvmf_init_if" 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:28.328 Cannot find device "nvmf_init_if2" 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:28.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:28.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:28.328 13:43:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:28.328 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:28.328 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:28.328 
13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:28.328 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:28.328 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:28.328 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:28.328 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:28.328 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:28.328 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:28.328 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:28.328 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:28.328 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:28.640 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:28.640 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:15:28.640 00:15:28.640 --- 10.0.0.3 ping statistics --- 00:15:28.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.640 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:28.640 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:28.640 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:15:28.640 00:15:28.640 --- 10.0.0.4 ping statistics --- 00:15:28.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.640 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:28.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:28.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:28.640 00:15:28.640 --- 10.0.0.1 ping statistics --- 00:15:28.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.640 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:28.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:28.640 00:15:28.640 --- 10.0.0.2 ping statistics --- 00:15:28.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.640 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74246 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74246 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 74246 ']' 00:15:28.640 
13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:28.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.640 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:28.641 13:43:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:28.641 [2024-10-01 13:43:20.345769] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:15:28.641 [2024-10-01 13:43:20.345884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.641 [2024-10-01 13:43:20.492604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:28.898 [2024-10-01 13:43:20.559102] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.898 [2024-10-01 13:43:20.559160] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.898 [2024-10-01 13:43:20.559172] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.898 [2024-10-01 13:43:20.559180] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.898 [2024-10-01 13:43:20.559188] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
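The nvmftestinit/nvmf_veth_init steps traced above amount to a small, reproducible recipe: two veth pairs facing the initiator, two facing a dedicated target network namespace, all four peer ends enslaved to one bridge, firewall openings for TCP port 4420, and the target started inside the namespace. The sketch below condenses the commands visible in the trace; the SPDK_DIR variable and the two loops are shorthand here (the run uses /home/vagrant/spdk_repo/spdk and straight-line commands), and the iptables comments added by the ipts helper are omitted.

  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # path used in this run

  # Target namespace plus four veth pairs (*_if ends stay host-side, *_br ends go on the bridge).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addresses: initiator side 10.0.0.1/.2, target namespace 10.0.0.3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring everything up and tie the peer ends together with one bridge.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Open the NVMe/TCP port on the initiator interfaces and allow bridged forwarding.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Check reachability in both directions, load the nvme-tcp module,
  # then launch the target inside the namespace as identify.sh does.
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &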
00:15:28.898 [2024-10-01 13:43:20.559254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.898 [2024-10-01 13:43:20.559370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.898 [2024-10-01 13:43:20.560081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:28.898 [2024-10-01 13:43:20.560105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.898 [2024-10-01 13:43:20.590949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:29.829 [2024-10-01 13:43:21.420682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:29.829 Malloc0 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:29.829 [2024-10-01 13:43:21.491781] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:29.829 [ 00:15:29.829 { 00:15:29.829 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:29.829 "subtype": "Discovery", 00:15:29.829 "listen_addresses": [ 00:15:29.829 { 00:15:29.829 "trtype": "TCP", 00:15:29.829 "adrfam": "IPv4", 00:15:29.829 "traddr": "10.0.0.3", 00:15:29.829 "trsvcid": "4420" 00:15:29.829 } 00:15:29.829 ], 00:15:29.829 "allow_any_host": true, 00:15:29.829 "hosts": [] 00:15:29.829 }, 00:15:29.829 { 00:15:29.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.829 "subtype": "NVMe", 00:15:29.829 "listen_addresses": [ 00:15:29.829 { 00:15:29.829 "trtype": "TCP", 00:15:29.829 "adrfam": "IPv4", 00:15:29.829 "traddr": "10.0.0.3", 00:15:29.829 "trsvcid": "4420" 00:15:29.829 } 00:15:29.829 ], 00:15:29.829 "allow_any_host": true, 00:15:29.829 "hosts": [], 00:15:29.829 "serial_number": "SPDK00000000000001", 00:15:29.829 "model_number": "SPDK bdev Controller", 00:15:29.829 "max_namespaces": 32, 00:15:29.829 "min_cntlid": 1, 00:15:29.829 "max_cntlid": 65519, 00:15:29.829 "namespaces": [ 00:15:29.829 { 00:15:29.829 "nsid": 1, 00:15:29.829 "bdev_name": "Malloc0", 00:15:29.829 "name": "Malloc0", 00:15:29.829 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:29.829 "eui64": "ABCDEF0123456789", 00:15:29.829 "uuid": "9635788c-f3aa-450f-8f26-c65c20d1a9f2" 00:15:29.829 } 00:15:29.829 ] 00:15:29.829 } 00:15:29.829 ] 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.829 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:29.829 [2024-10-01 13:43:21.540199] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
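From here the target is configured entirely over RPC (host/identify.sh lines 24-35 in the trace) and then queried with spdk_nvme_identify. A minimal sketch of the same sequence follows; it assumes the suite's rpc_cmd helper effectively forwards its arguments to scripts/rpc.py against the default /var/tmp/spdk.sock socket. The NQNs, serial number, malloc geometry, NGUID/EUI64 and the 10.0.0.3:4420 listener are copied from the trace above.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # assumed stand-in for the rpc_cmd helper

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_get_subsystems          # returns the JSON document shown above

  # Identify the discovery controller over NVMe/TCP, exactly as invoked in the log.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all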
00:15:29.829 [2024-10-01 13:43:21.540267] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74281 ] 00:15:29.829 [2024-10-01 13:43:21.684948] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:29.829 [2024-10-01 13:43:21.685024] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:29.829 [2024-10-01 13:43:21.685032] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:29.829 [2024-10-01 13:43:21.685046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:29.829 [2024-10-01 13:43:21.685058] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:29.829 [2024-10-01 13:43:21.685394] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:29.830 [2024-10-01 13:43:21.685467] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1370750 0 00:15:30.090 [2024-10-01 13:43:21.697565] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:30.090 [2024-10-01 13:43:21.697595] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:30.090 [2024-10-01 13:43:21.697603] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:30.090 [2024-10-01 13:43:21.697607] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:30.090 [2024-10-01 13:43:21.697648] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.697657] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.697661] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1370750) 00:15:30.090 [2024-10-01 13:43:21.697677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:30.090 [2024-10-01 13:43:21.697713] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4840, cid 0, qid 0 00:15:30.090 [2024-10-01 13:43:21.705568] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.090 [2024-10-01 13:43:21.705597] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.090 [2024-10-01 13:43:21.705603] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.705609] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4840) on tqpair=0x1370750 00:15:30.090 [2024-10-01 13:43:21.705620] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:30.090 [2024-10-01 13:43:21.705630] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:30.090 [2024-10-01 13:43:21.705637] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:30.090 [2024-10-01 13:43:21.705656] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.705661] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.090 
[2024-10-01 13:43:21.705666] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1370750) 00:15:30.090 [2024-10-01 13:43:21.705677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.090 [2024-10-01 13:43:21.705710] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4840, cid 0, qid 0 00:15:30.090 [2024-10-01 13:43:21.705775] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.090 [2024-10-01 13:43:21.705790] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.090 [2024-10-01 13:43:21.705798] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.705806] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4840) on tqpair=0x1370750 00:15:30.090 [2024-10-01 13:43:21.705814] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:30.090 [2024-10-01 13:43:21.705823] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:30.090 [2024-10-01 13:43:21.705833] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.705838] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.705842] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1370750) 00:15:30.090 [2024-10-01 13:43:21.705851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.090 [2024-10-01 13:43:21.705877] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4840, cid 0, qid 0 00:15:30.090 [2024-10-01 13:43:21.705926] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.090 [2024-10-01 13:43:21.705934] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.090 [2024-10-01 13:43:21.705938] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.705942] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4840) on tqpair=0x1370750 00:15:30.090 [2024-10-01 13:43:21.705948] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:30.090 [2024-10-01 13:43:21.705957] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:30.090 [2024-10-01 13:43:21.705965] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.705970] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.705974] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1370750) 00:15:30.090 [2024-10-01 13:43:21.705982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.090 [2024-10-01 13:43:21.706002] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4840, cid 0, qid 0 00:15:30.090 [2024-10-01 13:43:21.706053] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.090 [2024-10-01 13:43:21.706060] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.090 [2024-10-01 13:43:21.706064] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.706068] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4840) on tqpair=0x1370750 00:15:30.090 [2024-10-01 13:43:21.706074] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:30.090 [2024-10-01 13:43:21.706085] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.706090] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.706094] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1370750) 00:15:30.090 [2024-10-01 13:43:21.706102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.090 [2024-10-01 13:43:21.706121] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4840, cid 0, qid 0 00:15:30.090 [2024-10-01 13:43:21.706164] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.090 [2024-10-01 13:43:21.706171] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.090 [2024-10-01 13:43:21.706175] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.090 [2024-10-01 13:43:21.706180] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4840) on tqpair=0x1370750 00:15:30.091 [2024-10-01 13:43:21.706185] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:30.091 [2024-10-01 13:43:21.706191] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:30.091 [2024-10-01 13:43:21.706200] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:30.091 [2024-10-01 13:43:21.706310] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:30.091 [2024-10-01 13:43:21.706334] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:30.091 [2024-10-01 13:43:21.706351] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.706359] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.706366] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1370750) 00:15:30.091 [2024-10-01 13:43:21.706375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.091 [2024-10-01 13:43:21.706405] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4840, cid 0, qid 0 00:15:30.091 [2024-10-01 13:43:21.706463] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.091 [2024-10-01 13:43:21.706477] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.091 [2024-10-01 13:43:21.706485] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.091 
[2024-10-01 13:43:21.706490] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4840) on tqpair=0x1370750 00:15:30.091 [2024-10-01 13:43:21.706496] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:30.091 [2024-10-01 13:43:21.706507] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.706512] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.706517] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1370750) 00:15:30.091 [2024-10-01 13:43:21.706525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.091 [2024-10-01 13:43:21.706563] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4840, cid 0, qid 0 00:15:30.091 [2024-10-01 13:43:21.706615] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.091 [2024-10-01 13:43:21.706623] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.091 [2024-10-01 13:43:21.706627] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.706631] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4840) on tqpair=0x1370750 00:15:30.091 [2024-10-01 13:43:21.706637] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:30.091 [2024-10-01 13:43:21.706642] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:30.091 [2024-10-01 13:43:21.706651] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:30.091 [2024-10-01 13:43:21.706667] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:30.091 [2024-10-01 13:43:21.706680] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.706685] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1370750) 00:15:30.091 [2024-10-01 13:43:21.706693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.091 [2024-10-01 13:43:21.706722] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4840, cid 0, qid 0 00:15:30.091 [2024-10-01 13:43:21.706809] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:30.091 [2024-10-01 13:43:21.706818] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:30.091 [2024-10-01 13:43:21.706823] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.706831] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1370750): datao=0, datal=4096, cccid=0 00:15:30.091 [2024-10-01 13:43:21.706840] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13d4840) on tqpair(0x1370750): expected_datao=0, payload_size=4096 00:15:30.091 [2024-10-01 13:43:21.706848] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.091 
[2024-10-01 13:43:21.706862] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.706871] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.706886] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.091 [2024-10-01 13:43:21.706898] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.091 [2024-10-01 13:43:21.706904] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.706909] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4840) on tqpair=0x1370750 00:15:30.091 [2024-10-01 13:43:21.706919] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:30.091 [2024-10-01 13:43:21.706925] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:30.091 [2024-10-01 13:43:21.706930] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:30.091 [2024-10-01 13:43:21.706937] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:30.091 [2024-10-01 13:43:21.706945] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:30.091 [2024-10-01 13:43:21.706954] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:30.091 [2024-10-01 13:43:21.706968] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:30.091 [2024-10-01 13:43:21.706989] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.706996] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707000] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1370750) 00:15:30.091 [2024-10-01 13:43:21.707009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:30.091 [2024-10-01 13:43:21.707034] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4840, cid 0, qid 0 00:15:30.091 [2024-10-01 13:43:21.707091] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.091 [2024-10-01 13:43:21.707098] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.091 [2024-10-01 13:43:21.707102] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707107] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4840) on tqpair=0x1370750 00:15:30.091 [2024-10-01 13:43:21.707116] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707120] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707125] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1370750) 00:15:30.091 [2024-10-01 13:43:21.707132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.091 [2024-10-01 13:43:21.707139] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707144] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707148] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1370750) 00:15:30.091 [2024-10-01 13:43:21.707154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.091 [2024-10-01 13:43:21.707162] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707166] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707170] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1370750) 00:15:30.091 [2024-10-01 13:43:21.707176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.091 [2024-10-01 13:43:21.707183] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707188] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707192] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1370750) 00:15:30.091 [2024-10-01 13:43:21.707198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.091 [2024-10-01 13:43:21.707204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:30.091 [2024-10-01 13:43:21.707217] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:30.091 [2024-10-01 13:43:21.707226] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707231] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1370750) 00:15:30.091 [2024-10-01 13:43:21.707239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.091 [2024-10-01 13:43:21.707262] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4840, cid 0, qid 0 00:15:30.091 [2024-10-01 13:43:21.707270] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d49c0, cid 1, qid 0 00:15:30.091 [2024-10-01 13:43:21.707275] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4b40, cid 2, qid 0 00:15:30.091 [2024-10-01 13:43:21.707281] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4cc0, cid 3, qid 0 00:15:30.091 [2024-10-01 13:43:21.707289] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4e40, cid 4, qid 0 00:15:30.091 [2024-10-01 13:43:21.707378] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.091 [2024-10-01 13:43:21.707388] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.091 [2024-10-01 13:43:21.707392] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707397] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4e40) on tqpair=0x1370750 00:15:30.091 [2024-10-01 13:43:21.707403] 
nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:30.091 [2024-10-01 13:43:21.707409] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:30.091 [2024-10-01 13:43:21.707422] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707427] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1370750) 00:15:30.091 [2024-10-01 13:43:21.707436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.091 [2024-10-01 13:43:21.707459] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4e40, cid 4, qid 0 00:15:30.091 [2024-10-01 13:43:21.707519] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:30.091 [2024-10-01 13:43:21.707527] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:30.091 [2024-10-01 13:43:21.707531] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707549] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1370750): datao=0, datal=4096, cccid=4 00:15:30.091 [2024-10-01 13:43:21.707555] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13d4e40) on tqpair(0x1370750): expected_datao=0, payload_size=4096 00:15:30.091 [2024-10-01 13:43:21.707560] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707569] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707573] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707582] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.091 [2024-10-01 13:43:21.707589] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.091 [2024-10-01 13:43:21.707593] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707598] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4e40) on tqpair=0x1370750 00:15:30.091 [2024-10-01 13:43:21.707613] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:30.091 [2024-10-01 13:43:21.707648] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707655] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1370750) 00:15:30.091 [2024-10-01 13:43:21.707663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.091 [2024-10-01 13:43:21.707671] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.091 [2024-10-01 13:43:21.707676] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.707680] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1370750) 00:15:30.092 [2024-10-01 13:43:21.707687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.092 [2024-10-01 13:43:21.707715] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x13d4e40, cid 4, qid 0 00:15:30.092 [2024-10-01 13:43:21.707723] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4fc0, cid 5, qid 0 00:15:30.092 [2024-10-01 13:43:21.707816] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:30.092 [2024-10-01 13:43:21.707824] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:30.092 [2024-10-01 13:43:21.707828] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.707832] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1370750): datao=0, datal=1024, cccid=4 00:15:30.092 [2024-10-01 13:43:21.707848] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13d4e40) on tqpair(0x1370750): expected_datao=0, payload_size=1024 00:15:30.092 [2024-10-01 13:43:21.707857] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.707865] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.707869] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.707876] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.092 [2024-10-01 13:43:21.707882] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.092 [2024-10-01 13:43:21.707886] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.707891] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4fc0) on tqpair=0x1370750 00:15:30.092 [2024-10-01 13:43:21.707912] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.092 [2024-10-01 13:43:21.707921] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.092 [2024-10-01 13:43:21.707925] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.707929] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4e40) on tqpair=0x1370750 00:15:30.092 [2024-10-01 13:43:21.707943] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.707948] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1370750) 00:15:30.092 [2024-10-01 13:43:21.707956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.092 [2024-10-01 13:43:21.707983] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4e40, cid 4, qid 0 00:15:30.092 [2024-10-01 13:43:21.708057] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:30.092 [2024-10-01 13:43:21.708064] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:30.092 [2024-10-01 13:43:21.708068] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.708072] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1370750): datao=0, datal=3072, cccid=4 00:15:30.092 [2024-10-01 13:43:21.708077] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13d4e40) on tqpair(0x1370750): expected_datao=0, payload_size=3072 00:15:30.092 [2024-10-01 13:43:21.708083] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.708090] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.708095] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.708104] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.092 [2024-10-01 13:43:21.708110] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.092 [2024-10-01 13:43:21.708114] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.708118] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4e40) on tqpair=0x1370750 00:15:30.092 [2024-10-01 13:43:21.708129] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.708134] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1370750) 00:15:30.092 [2024-10-01 13:43:21.708142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.092 [2024-10-01 13:43:21.708168] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4e40, cid 4, qid 0 00:15:30.092 [2024-10-01 13:43:21.708229] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:30.092 [2024-10-01 13:43:21.708239] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:30.092 [2024-10-01 13:43:21.708243] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.708247] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1370750): datao=0, datal=8, cccid=4 00:15:30.092 [2024-10-01 13:43:21.708253] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13d4e40) on tqpair(0x1370750): expected_datao=0, payload_size=8 00:15:30.092 [2024-10-01 13:43:21.708258] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.708265] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.708269] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:30.092 [2024-10-01 13:43:21.708289] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.092 [2024-10-01 13:43:21.708297] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.092 [2024-10-01 13:43:21.708302] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.092 ===================================================== 00:15:30.092 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:30.092 ===================================================== 00:15:30.092 Controller Capabilities/Features 00:15:30.092 ================================ 00:15:30.092 Vendor ID: 0000 00:15:30.092 Subsystem Vendor ID: 0000 00:15:30.092 Serial Number: .................... 00:15:30.092 Model Number: ........................................ 
00:15:30.092 Firmware Version: 25.01 00:15:30.092 Recommended Arb Burst: 0 00:15:30.092 IEEE OUI Identifier: 00 00 00 00:15:30.092 Multi-path I/O 00:15:30.092 May have multiple subsystem ports: No 00:15:30.092 May have multiple controllers: No 00:15:30.092 Associated with SR-IOV VF: No 00:15:30.092 Max Data Transfer Size: 131072 00:15:30.092 Max Number of Namespaces: 0 00:15:30.092 Max Number of I/O Queues: 1024 00:15:30.092 NVMe Specification Version (VS): 1.3 00:15:30.092 NVMe Specification Version (Identify): 1.3 00:15:30.092 Maximum Queue Entries: 128 00:15:30.092 Contiguous Queues Required: Yes 00:15:30.092 Arbitration Mechanisms Supported 00:15:30.092 Weighted Round Robin: Not Supported 00:15:30.092 Vendor Specific: Not Supported 00:15:30.092 Reset Timeout: 15000 ms 00:15:30.092 Doorbell Stride: 4 bytes 00:15:30.092 NVM Subsystem Reset: Not Supported 00:15:30.092 Command Sets Supported 00:15:30.092 NVM Command Set: Supported 00:15:30.092 Boot Partition: Not Supported 00:15:30.092 Memory Page Size Minimum: 4096 bytes 00:15:30.092 Memory Page Size Maximum: 4096 bytes 00:15:30.092 Persistent Memory Region: Not Supported 00:15:30.092 Optional Asynchronous Events Supported 00:15:30.092 Namespace Attribute Notices: Not Supported 00:15:30.092 Firmware Activation Notices: Not Supported 00:15:30.092 ANA Change Notices: Not Supported 00:15:30.092 PLE Aggregate Log Change Notices: Not Supported 00:15:30.092 LBA Status Info Alert Notices: Not Supported 00:15:30.092 EGE Aggregate Log Change Notices: Not Supported 00:15:30.092 Normal NVM Subsystem Shutdown event: Not Supported 00:15:30.092 Zone Descriptor Change Notices: Not Supported 00:15:30.092 Discovery Log Change Notices: Supported 00:15:30.092 Controller Attributes 00:15:30.092 128-bit Host Identifier: Not Supported 00:15:30.092 Non-Operational Permissive Mode: Not Supported 00:15:30.092 NVM Sets: Not Supported 00:15:30.092 Read Recovery Levels: Not Supported 00:15:30.092 Endurance Groups: Not Supported 00:15:30.092 Predictable Latency Mode: Not Supported 00:15:30.092 Traffic Based Keep ALive: Not Supported 00:15:30.092 Namespace Granularity: Not Supported 00:15:30.092 SQ Associations: Not Supported 00:15:30.092 UUID List: Not Supported 00:15:30.092 Multi-Domain Subsystem: Not Supported 00:15:30.092 Fixed Capacity Management: Not Supported 00:15:30.092 Variable Capacity Management: Not Supported 00:15:30.092 Delete Endurance Group: Not Supported 00:15:30.092 Delete NVM Set: Not Supported 00:15:30.092 Extended LBA Formats Supported: Not Supported 00:15:30.092 Flexible Data Placement Supported: Not Supported 00:15:30.092 00:15:30.092 Controller Memory Buffer Support 00:15:30.092 ================================ 00:15:30.092 Supported: No 00:15:30.092 00:15:30.092 Persistent Memory Region Support 00:15:30.092 ================================ 00:15:30.092 Supported: No 00:15:30.092 00:15:30.092 Admin Command Set Attributes 00:15:30.092 ============================ 00:15:30.093 Security Send/Receive: Not Supported 00:15:30.093 Format NVM: Not Supported 00:15:30.093 Firmware Activate/Download: Not Supported 00:15:30.093 Namespace Management: Not Supported 00:15:30.093 Device Self-Test: Not Supported 00:15:30.093 Directives: Not Supported 00:15:30.093 NVMe-MI: Not Supported 00:15:30.093 Virtualization Management: Not Supported 00:15:30.093 Doorbell Buffer Config: Not Supported 00:15:30.093 Get LBA Status Capability: Not Supported 00:15:30.093 Command & Feature Lockdown Capability: Not Supported 00:15:30.093 Abort Command Limit: 1 00:15:30.093 Async 
Event Request Limit: 4 00:15:30.093 Number of Firmware Slots: N/A 00:15:30.093 Firmware Slot 1 Read-Only: N/A 00:15:30.093 [2024-10-01 13:43:21.708309] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4e40) on tqpair=0x1370750 00:15:30.093 Firmware Activation Without Reset: N/A 00:15:30.093 Multiple Update Detection Support: N/A 00:15:30.093 Firmware Update Granularity: No Information Provided 00:15:30.093 Per-Namespace SMART Log: No 00:15:30.093 Asymmetric Namespace Access Log Page: Not Supported 00:15:30.093 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:30.093 Command Effects Log Page: Not Supported 00:15:30.093 Get Log Page Extended Data: Supported 00:15:30.093 Telemetry Log Pages: Not Supported 00:15:30.093 Persistent Event Log Pages: Not Supported 00:15:30.093 Supported Log Pages Log Page: May Support 00:15:30.093 Commands Supported & Effects Log Page: Not Supported 00:15:30.093 Feature Identifiers & Effects Log Page:May Support 00:15:30.093 NVMe-MI Commands & Effects Log Page: May Support 00:15:30.093 Data Area 4 for Telemetry Log: Not Supported 00:15:30.093 Error Log Page Entries Supported: 128 00:15:30.093 Keep Alive: Not Supported 00:15:30.093 00:15:30.093 NVM Command Set Attributes 00:15:30.093 ========================== 00:15:30.093 Submission Queue Entry Size 00:15:30.093 Max: 1 00:15:30.093 Min: 1 00:15:30.093 Completion Queue Entry Size 00:15:30.093 Max: 1 00:15:30.093 Min: 1 00:15:30.093 Number of Namespaces: 0 00:15:30.093 Compare Command: Not Supported 00:15:30.093 Write Uncorrectable Command: Not Supported 00:15:30.093 Dataset Management Command: Not Supported 00:15:30.093 Write Zeroes Command: Not Supported 00:15:30.093 Set Features Save Field: Not Supported 00:15:30.093 Reservations: Not Supported 00:15:30.093 Timestamp: Not Supported 00:15:30.093 Copy: Not Supported 00:15:30.093 Volatile Write Cache: Not Present 00:15:30.093 Atomic Write Unit (Normal): 1 00:15:30.093 Atomic Write Unit (PFail): 1 00:15:30.093 Atomic Compare & Write Unit: 1 00:15:30.093 Fused Compare & Write: Supported 00:15:30.093 Scatter-Gather List 00:15:30.093 SGL Command Set: Supported 00:15:30.093 SGL Keyed: Supported 00:15:30.093 SGL Bit Bucket Descriptor: Not Supported 00:15:30.093 SGL Metadata Pointer: Not Supported 00:15:30.093 Oversized SGL: Not Supported 00:15:30.093 SGL Metadata Address: Not Supported 00:15:30.093 SGL Offset: Supported 00:15:30.093 Transport SGL Data Block: Not Supported 00:15:30.093 Replay Protected Memory Block: Not Supported 00:15:30.093 00:15:30.093 Firmware Slot Information 00:15:30.093 ========================= 00:15:30.093 Active slot: 0 00:15:30.093 00:15:30.093 00:15:30.093 Error Log 00:15:30.093 ========= 00:15:30.093 00:15:30.093 Active Namespaces 00:15:30.093 ================= 00:15:30.093 Discovery Log Page 00:15:30.093 ================== 00:15:30.093 Generation Counter: 2 00:15:30.093 Number of Records: 2 00:15:30.093 Record Format: 0 00:15:30.093 00:15:30.093 Discovery Log Entry 0 00:15:30.093 ---------------------- 00:15:30.093 Transport Type: 3 (TCP) 00:15:30.093 Address Family: 1 (IPv4) 00:15:30.093 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:30.093 Entry Flags: 00:15:30.093 Duplicate Returned Information: 1 00:15:30.093 Explicit Persistent Connection Support for Discovery: 1 00:15:30.093 Transport Requirements: 00:15:30.093 Secure Channel: Not Required 00:15:30.093 Port ID: 0 (0x0000) 00:15:30.093 Controller ID: 65535 (0xffff) 00:15:30.093 Admin Max SQ Size: 128 00:15:30.093 Transport Service Identifier: 4420
00:15:30.093 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:30.093 Transport Address: 10.0.0.3 00:15:30.093 Discovery Log Entry 1 00:15:30.093 ---------------------- 00:15:30.093 Transport Type: 3 (TCP) 00:15:30.093 Address Family: 1 (IPv4) 00:15:30.093 Subsystem Type: 2 (NVM Subsystem) 00:15:30.093 Entry Flags: 00:15:30.093 Duplicate Returned Information: 0 00:15:30.093 Explicit Persistent Connection Support for Discovery: 0 00:15:30.093 Transport Requirements: 00:15:30.093 Secure Channel: Not Required 00:15:30.093 Port ID: 0 (0x0000) 00:15:30.093 Controller ID: 65535 (0xffff) 00:15:30.093 Admin Max SQ Size: 128 00:15:30.093 Transport Service Identifier: 4420 00:15:30.093 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:30.093 Transport Address: 10.0.0.3 [2024-10-01 13:43:21.708433] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:30.093 [2024-10-01 13:43:21.708456] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4840) on tqpair=0x1370750 00:15:30.093 [2024-10-01 13:43:21.708465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.093 [2024-10-01 13:43:21.708472] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d49c0) on tqpair=0x1370750 00:15:30.093 [2024-10-01 13:43:21.708477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.093 [2024-10-01 13:43:21.708482] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4b40) on tqpair=0x1370750 00:15:30.093 [2024-10-01 13:43:21.708487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.093 [2024-10-01 13:43:21.708493] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4cc0) on tqpair=0x1370750 00:15:30.093 [2024-10-01 13:43:21.708498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.093 [2024-10-01 13:43:21.708508] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.093 [2024-10-01 13:43:21.708513] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.093 [2024-10-01 13:43:21.708517] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1370750) 00:15:30.093 [2024-10-01 13:43:21.708526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.093 [2024-10-01 13:43:21.708581] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4cc0, cid 3, qid 0 00:15:30.093 [2024-10-01 13:43:21.708630] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.093 [2024-10-01 13:43:21.708638] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.093 [2024-10-01 13:43:21.708642] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.093 [2024-10-01 13:43:21.708647] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4cc0) on tqpair=0x1370750 00:15:30.093 [2024-10-01 13:43:21.708656] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.093 [2024-10-01 13:43:21.708660] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.093 [2024-10-01 13:43:21.708665] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1370750) 00:15:30.093 [2024-10-01 13:43:21.708673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.093 [2024-10-01 13:43:21.708697] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4cc0, cid 3, qid 0 00:15:30.093 [2024-10-01 13:43:21.708768] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.093 [2024-10-01 13:43:21.708798] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.093 [2024-10-01 13:43:21.708804] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.093 [2024-10-01 13:43:21.708809] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4cc0) on tqpair=0x1370750 00:15:30.093 [2024-10-01 13:43:21.708816] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:30.093 [2024-10-01 13:43:21.708821] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:30.093 [2024-10-01 13:43:21.708835] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.093 [2024-10-01 13:43:21.708840] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.093 [2024-10-01 13:43:21.708844] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1370750) 00:15:30.093 [2024-10-01 13:43:21.708852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.093 [2024-10-01 13:43:21.708876] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4cc0, cid 3, qid 0 00:15:30.093 [2024-10-01 13:43:21.708927] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.093 [2024-10-01 13:43:21.708934] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.093 [2024-10-01 13:43:21.708938] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.093 [2024-10-01 13:43:21.708943] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4cc0) on tqpair=0x1370750 00:15:30.093 [2024-10-01 13:43:21.708955] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.093 [2024-10-01 13:43:21.708960] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.093 [2024-10-01 13:43:21.708964] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1370750) 00:15:30.093 [2024-10-01 13:43:21.708972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.094 [2024-10-01 13:43:21.708991] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4cc0, cid 3, qid 0 00:15:30.094 [2024-10-01 13:43:21.709034] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.094 [2024-10-01 13:43:21.709047] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.094 [2024-10-01 13:43:21.709051] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709056] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4cc0) on tqpair=0x1370750 00:15:30.094 [2024-10-01 13:43:21.709068] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.094 [2024-10-01 
13:43:21.709073] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709077] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1370750) 00:15:30.094 [2024-10-01 13:43:21.709085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.094 [2024-10-01 13:43:21.709104] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4cc0, cid 3, qid 0 00:15:30.094 [2024-10-01 13:43:21.709151] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.094 [2024-10-01 13:43:21.709158] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.094 [2024-10-01 13:43:21.709162] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709167] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4cc0) on tqpair=0x1370750 00:15:30.094 [2024-10-01 13:43:21.709177] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709182] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709186] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1370750) 00:15:30.094 [2024-10-01 13:43:21.709194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.094 [2024-10-01 13:43:21.709213] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4cc0, cid 3, qid 0 00:15:30.094 [2024-10-01 13:43:21.709256] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.094 [2024-10-01 13:43:21.709263] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.094 [2024-10-01 13:43:21.709267] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709271] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4cc0) on tqpair=0x1370750 00:15:30.094 [2024-10-01 13:43:21.709282] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709287] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709291] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1370750) 00:15:30.094 [2024-10-01 13:43:21.709299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.094 [2024-10-01 13:43:21.709317] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4cc0, cid 3, qid 0 00:15:30.094 [2024-10-01 13:43:21.709360] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.094 [2024-10-01 13:43:21.709368] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.094 [2024-10-01 13:43:21.709372] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709376] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4cc0) on tqpair=0x1370750 00:15:30.094 [2024-10-01 13:43:21.709387] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709392] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709396] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1370750) 00:15:30.094 [2024-10-01 13:43:21.709404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.094 [2024-10-01 13:43:21.709422] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4cc0, cid 3, qid 0 00:15:30.094 [2024-10-01 13:43:21.709471] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.094 [2024-10-01 13:43:21.709478] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.094 [2024-10-01 13:43:21.709482] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709486] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4cc0) on tqpair=0x1370750 00:15:30.094 [2024-10-01 13:43:21.709497] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709502] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.709506] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1370750) 00:15:30.094 [2024-10-01 13:43:21.709514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.094 [2024-10-01 13:43:21.709532] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4cc0, cid 3, qid 0 00:15:30.094 [2024-10-01 13:43:21.713574] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.094 [2024-10-01 13:43:21.713586] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.094 [2024-10-01 13:43:21.713590] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.713595] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4cc0) on tqpair=0x1370750 00:15:30.094 [2024-10-01 13:43:21.713611] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.713617] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.713621] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1370750) 00:15:30.094 [2024-10-01 13:43:21.713631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.094 [2024-10-01 13:43:21.713660] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d4cc0, cid 3, qid 0 00:15:30.094 [2024-10-01 13:43:21.713712] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.094 [2024-10-01 13:43:21.713719] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.094 [2024-10-01 13:43:21.713723] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.713728] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13d4cc0) on tqpair=0x1370750 00:15:30.094 [2024-10-01 13:43:21.713737] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:15:30.094 00:15:30.094 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:30.094 [2024-10-01 13:43:21.754951] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 
initialization... 00:15:30.094 [2024-10-01 13:43:21.755025] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74283 ] 00:15:30.094 [2024-10-01 13:43:21.896280] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:30.094 [2024-10-01 13:43:21.896349] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:30.094 [2024-10-01 13:43:21.896357] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:30.094 [2024-10-01 13:43:21.896379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:30.094 [2024-10-01 13:43:21.896390] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:30.094 [2024-10-01 13:43:21.900737] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:30.094 [2024-10-01 13:43:21.900818] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfd1750 0 00:15:30.094 [2024-10-01 13:43:21.908567] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:30.094 [2024-10-01 13:43:21.908604] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:30.094 [2024-10-01 13:43:21.908613] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:30.094 [2024-10-01 13:43:21.908617] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:30.094 [2024-10-01 13:43:21.908661] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.908670] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.908675] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1750) 00:15:30.094 [2024-10-01 13:43:21.908690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:30.094 [2024-10-01 13:43:21.908728] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035840, cid 0, qid 0 00:15:30.094 [2024-10-01 13:43:21.916562] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.094 [2024-10-01 13:43:21.916588] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.094 [2024-10-01 13:43:21.916594] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.916600] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035840) on tqpair=0xfd1750 00:15:30.094 [2024-10-01 13:43:21.916611] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:30.094 [2024-10-01 13:43:21.916621] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:30.094 [2024-10-01 13:43:21.916628] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:30.094 [2024-10-01 13:43:21.916647] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.916653] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.094 [2024-10-01 13:43:21.916658] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1750) 00:15:30.094 [2024-10-01 13:43:21.916669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.094 [2024-10-01 13:43:21.916702] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035840, cid 0, qid 0 00:15:30.094 [2024-10-01 13:43:21.916776] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.094 [2024-10-01 13:43:21.916787] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.095 [2024-10-01 13:43:21.916791] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.916796] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035840) on tqpair=0xfd1750 00:15:30.095 [2024-10-01 13:43:21.916802] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:30.095 [2024-10-01 13:43:21.916813] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:30.095 [2024-10-01 13:43:21.916826] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.916835] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.916842] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1750) 00:15:30.095 [2024-10-01 13:43:21.916854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.095 [2024-10-01 13:43:21.916885] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035840, cid 0, qid 0 00:15:30.095 [2024-10-01 13:43:21.916936] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.095 [2024-10-01 13:43:21.916945] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.095 [2024-10-01 13:43:21.916949] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.916954] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035840) on tqpair=0xfd1750 00:15:30.095 [2024-10-01 13:43:21.916961] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:30.095 [2024-10-01 13:43:21.916975] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:30.095 [2024-10-01 13:43:21.916995] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.917004] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.917011] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1750) 00:15:30.095 [2024-10-01 13:43:21.917022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.095 [2024-10-01 13:43:21.917047] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035840, cid 0, qid 0 00:15:30.095 [2024-10-01 13:43:21.917095] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.095 [2024-10-01 13:43:21.917105] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.095 [2024-10-01 
13:43:21.917110] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.917117] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035840) on tqpair=0xfd1750 00:15:30.095 [2024-10-01 13:43:21.917127] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:30.095 [2024-10-01 13:43:21.917139] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.917147] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.917153] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1750) 00:15:30.095 [2024-10-01 13:43:21.917167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.095 [2024-10-01 13:43:21.917199] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035840, cid 0, qid 0 00:15:30.095 [2024-10-01 13:43:21.917245] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.095 [2024-10-01 13:43:21.917257] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.095 [2024-10-01 13:43:21.917265] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.917272] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035840) on tqpair=0xfd1750 00:15:30.095 [2024-10-01 13:43:21.917281] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:30.095 [2024-10-01 13:43:21.917291] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:30.095 [2024-10-01 13:43:21.917306] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:30.095 [2024-10-01 13:43:21.917416] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:30.095 [2024-10-01 13:43:21.917437] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:30.095 [2024-10-01 13:43:21.917456] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.917464] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.917469] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1750) 00:15:30.095 [2024-10-01 13:43:21.917478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.095 [2024-10-01 13:43:21.917507] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035840, cid 0, qid 0 00:15:30.095 [2024-10-01 13:43:21.917577] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.095 [2024-10-01 13:43:21.917595] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.095 [2024-10-01 13:43:21.917602] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.917615] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035840) on tqpair=0xfd1750 00:15:30.095 [2024-10-01 
13:43:21.917621] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:30.095 [2024-10-01 13:43:21.917635] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.917641] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.917648] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1750) 00:15:30.095 [2024-10-01 13:43:21.917660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.095 [2024-10-01 13:43:21.917696] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035840, cid 0, qid 0 00:15:30.095 [2024-10-01 13:43:21.917743] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.095 [2024-10-01 13:43:21.917757] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.095 [2024-10-01 13:43:21.917764] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.917769] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035840) on tqpair=0xfd1750 00:15:30.095 [2024-10-01 13:43:21.917775] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:30.095 [2024-10-01 13:43:21.917780] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:30.095 [2024-10-01 13:43:21.917790] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:30.095 [2024-10-01 13:43:21.917808] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:30.095 [2024-10-01 13:43:21.917821] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.917828] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1750) 00:15:30.095 [2024-10-01 13:43:21.917841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.095 [2024-10-01 13:43:21.917875] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035840, cid 0, qid 0 00:15:30.095 [2024-10-01 13:43:21.917975] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:30.095 [2024-10-01 13:43:21.917991] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:30.095 [2024-10-01 13:43:21.917996] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.918001] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd1750): datao=0, datal=4096, cccid=0 00:15:30.095 [2024-10-01 13:43:21.918008] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1035840) on tqpair(0xfd1750): expected_datao=0, payload_size=4096 00:15:30.095 [2024-10-01 13:43:21.918016] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.918030] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.918037] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:30.095 
[2024-10-01 13:43:21.918052] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.095 [2024-10-01 13:43:21.918062] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.095 [2024-10-01 13:43:21.918066] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.918071] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035840) on tqpair=0xfd1750 00:15:30.095 [2024-10-01 13:43:21.918081] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:30.095 [2024-10-01 13:43:21.918088] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:30.095 [2024-10-01 13:43:21.918095] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:30.095 [2024-10-01 13:43:21.918104] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:30.095 [2024-10-01 13:43:21.918112] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:30.095 [2024-10-01 13:43:21.918121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:30.095 [2024-10-01 13:43:21.918135] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:30.095 [2024-10-01 13:43:21.918151] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.918157] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.095 [2024-10-01 13:43:21.918161] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1750) 00:15:30.095 [2024-10-01 13:43:21.918174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:30.095 [2024-10-01 13:43:21.918207] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035840, cid 0, qid 0 00:15:30.096 [2024-10-01 13:43:21.918273] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.096 [2024-10-01 13:43:21.918290] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.096 [2024-10-01 13:43:21.918295] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918300] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035840) on tqpair=0xfd1750 00:15:30.096 [2024-10-01 13:43:21.918309] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918314] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918318] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1750) 00:15:30.096 [2024-10-01 13:43:21.918326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.096 [2024-10-01 13:43:21.918333] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918338] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918342] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfd1750) 
00:15:30.096 [2024-10-01 13:43:21.918348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.096 [2024-10-01 13:43:21.918355] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918360] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918364] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfd1750) 00:15:30.096 [2024-10-01 13:43:21.918370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.096 [2024-10-01 13:43:21.918380] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918388] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918394] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.096 [2024-10-01 13:43:21.918403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.096 [2024-10-01 13:43:21.918411] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:30.096 [2024-10-01 13:43:21.918430] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:30.096 [2024-10-01 13:43:21.918439] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918444] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd1750) 00:15:30.096 [2024-10-01 13:43:21.918455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.096 [2024-10-01 13:43:21.918489] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035840, cid 0, qid 0 00:15:30.096 [2024-10-01 13:43:21.918502] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10359c0, cid 1, qid 0 00:15:30.096 [2024-10-01 13:43:21.918511] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035b40, cid 2, qid 0 00:15:30.096 [2024-10-01 13:43:21.918520] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.096 [2024-10-01 13:43:21.918529] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035e40, cid 4, qid 0 00:15:30.096 [2024-10-01 13:43:21.918618] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.096 [2024-10-01 13:43:21.918635] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.096 [2024-10-01 13:43:21.918640] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918645] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035e40) on tqpair=0xfd1750 00:15:30.096 [2024-10-01 13:43:21.918651] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:30.096 [2024-10-01 13:43:21.918658] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:30.096 [2024-10-01 
13:43:21.918673] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:30.096 [2024-10-01 13:43:21.918683] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:30.096 [2024-10-01 13:43:21.918695] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918700] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918704] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd1750) 00:15:30.096 [2024-10-01 13:43:21.918713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:30.096 [2024-10-01 13:43:21.918744] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035e40, cid 4, qid 0 00:15:30.096 [2024-10-01 13:43:21.918794] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.096 [2024-10-01 13:43:21.918807] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.096 [2024-10-01 13:43:21.918814] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918821] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035e40) on tqpair=0xfd1750 00:15:30.096 [2024-10-01 13:43:21.918900] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:30.096 [2024-10-01 13:43:21.918928] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:30.096 [2024-10-01 13:43:21.918939] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.918945] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd1750) 00:15:30.096 [2024-10-01 13:43:21.918957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.096 [2024-10-01 13:43:21.918990] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035e40, cid 4, qid 0 00:15:30.096 [2024-10-01 13:43:21.919052] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:30.096 [2024-10-01 13:43:21.919062] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:30.096 [2024-10-01 13:43:21.919066] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919071] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd1750): datao=0, datal=4096, cccid=4 00:15:30.096 [2024-10-01 13:43:21.919079] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1035e40) on tqpair(0xfd1750): expected_datao=0, payload_size=4096 00:15:30.096 [2024-10-01 13:43:21.919087] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919099] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919108] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919122] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.096 [2024-10-01 13:43:21.919132] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.096 [2024-10-01 13:43:21.919137] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919142] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035e40) on tqpair=0xfd1750 00:15:30.096 [2024-10-01 13:43:21.919162] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:30.096 [2024-10-01 13:43:21.919181] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:30.096 [2024-10-01 13:43:21.919193] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:30.096 [2024-10-01 13:43:21.919203] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919210] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd1750) 00:15:30.096 [2024-10-01 13:43:21.919222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.096 [2024-10-01 13:43:21.919256] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035e40, cid 4, qid 0 00:15:30.096 [2024-10-01 13:43:21.919323] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:30.096 [2024-10-01 13:43:21.919338] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:30.096 [2024-10-01 13:43:21.919342] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919347] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd1750): datao=0, datal=4096, cccid=4 00:15:30.096 [2024-10-01 13:43:21.919352] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1035e40) on tqpair(0xfd1750): expected_datao=0, payload_size=4096 00:15:30.096 [2024-10-01 13:43:21.919357] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919365] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919370] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919379] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.096 [2024-10-01 13:43:21.919386] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.096 [2024-10-01 13:43:21.919390] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919394] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035e40) on tqpair=0xfd1750 00:15:30.096 [2024-10-01 13:43:21.919408] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:30.096 [2024-10-01 13:43:21.919423] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:30.096 [2024-10-01 13:43:21.919438] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919445] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd1750) 00:15:30.096 [2024-10-01 13:43:21.919454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.096 [2024-10-01 13:43:21.919479] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035e40, cid 4, qid 0 00:15:30.096 [2024-10-01 13:43:21.919555] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:30.096 [2024-10-01 13:43:21.919570] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:30.096 [2024-10-01 13:43:21.919575] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919579] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd1750): datao=0, datal=4096, cccid=4 00:15:30.096 [2024-10-01 13:43:21.919585] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1035e40) on tqpair(0xfd1750): expected_datao=0, payload_size=4096 00:15:30.096 [2024-10-01 13:43:21.919590] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919599] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919606] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919620] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.096 [2024-10-01 13:43:21.919629] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.096 [2024-10-01 13:43:21.919633] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.096 [2024-10-01 13:43:21.919638] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035e40) on tqpair=0xfd1750 00:15:30.097 [2024-10-01 13:43:21.919654] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:30.097 [2024-10-01 13:43:21.919666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:30.097 [2024-10-01 13:43:21.919675] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:30.097 [2024-10-01 13:43:21.919683] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:30.097 [2024-10-01 13:43:21.919689] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:30.097 [2024-10-01 13:43:21.919695] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:30.097 [2024-10-01 13:43:21.919700] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:30.097 [2024-10-01 13:43:21.919705] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:30.097 [2024-10-01 13:43:21.919711] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:30.097 [2024-10-01 13:43:21.919730] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.919738] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd1750) 00:15:30.097 [2024-10-01 13:43:21.919751] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.097 [2024-10-01 13:43:21.919764] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.919772] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.919778] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfd1750) 00:15:30.097 [2024-10-01 13:43:21.919787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.097 [2024-10-01 13:43:21.919822] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035e40, cid 4, qid 0 00:15:30.097 [2024-10-01 13:43:21.919836] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035fc0, cid 5, qid 0 00:15:30.097 [2024-10-01 13:43:21.919910] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.097 [2024-10-01 13:43:21.919921] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.097 [2024-10-01 13:43:21.919926] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.919930] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035e40) on tqpair=0xfd1750 00:15:30.097 [2024-10-01 13:43:21.919938] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.097 [2024-10-01 13:43:21.919944] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.097 [2024-10-01 13:43:21.919948] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.919953] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035fc0) on tqpair=0xfd1750 00:15:30.097 [2024-10-01 13:43:21.919972] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.919981] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfd1750) 00:15:30.097 [2024-10-01 13:43:21.919993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.097 [2024-10-01 13:43:21.920020] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035fc0, cid 5, qid 0 00:15:30.097 [2024-10-01 13:43:21.920067] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.097 [2024-10-01 13:43:21.920080] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.097 [2024-10-01 13:43:21.920087] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.920094] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035fc0) on tqpair=0xfd1750 00:15:30.097 [2024-10-01 13:43:21.920111] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.920117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfd1750) 00:15:30.097 [2024-10-01 13:43:21.920125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.097 [2024-10-01 13:43:21.920152] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035fc0, cid 5, qid 0 00:15:30.097 [2024-10-01 13:43:21.920212] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.097 
[2024-10-01 13:43:21.920224] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.097 [2024-10-01 13:43:21.920231] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.920238] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035fc0) on tqpair=0xfd1750 00:15:30.097 [2024-10-01 13:43:21.920256] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.920265] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfd1750) 00:15:30.097 [2024-10-01 13:43:21.920276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.097 [2024-10-01 13:43:21.920298] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035fc0, cid 5, qid 0 00:15:30.097 [2024-10-01 13:43:21.920343] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.097 [2024-10-01 13:43:21.920354] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.097 [2024-10-01 13:43:21.920361] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.920368] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035fc0) on tqpair=0xfd1750 00:15:30.097 [2024-10-01 13:43:21.920396] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.920409] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfd1750) 00:15:30.097 [2024-10-01 13:43:21.920418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.097 [2024-10-01 13:43:21.920427] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.920431] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd1750) 00:15:30.097 [2024-10-01 13:43:21.920439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.097 [2024-10-01 13:43:21.920447] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.920451] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xfd1750) 00:15:30.097 [2024-10-01 13:43:21.920459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.097 [2024-10-01 13:43:21.920471] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.920477] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfd1750) 00:15:30.097 [2024-10-01 13:43:21.920484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.097 [2024-10-01 13:43:21.920514] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035fc0, cid 5, qid 0 00:15:30.097 [2024-10-01 13:43:21.920528] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035e40, cid 4, qid 0 00:15:30.097 [2024-10-01 13:43:21.924556] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1036140, cid 6, qid 0 00:15:30.097 [2024-10-01 13:43:21.924579] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10362c0, cid 7, qid 0 00:15:30.097 [2024-10-01 13:43:21.924597] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:30.097 [2024-10-01 13:43:21.924605] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:30.097 [2024-10-01 13:43:21.924609] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924613] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd1750): datao=0, datal=8192, cccid=5 00:15:30.097 [2024-10-01 13:43:21.924619] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1035fc0) on tqpair(0xfd1750): expected_datao=0, payload_size=8192 00:15:30.097 [2024-10-01 13:43:21.924624] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924633] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924638] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924644] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:30.097 [2024-10-01 13:43:21.924650] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:30.097 [2024-10-01 13:43:21.924654] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924659] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd1750): datao=0, datal=512, cccid=4 00:15:30.097 [2024-10-01 13:43:21.924664] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1035e40) on tqpair(0xfd1750): expected_datao=0, payload_size=512 00:15:30.097 [2024-10-01 13:43:21.924668] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924675] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924679] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924686] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:30.097 [2024-10-01 13:43:21.924695] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:30.097 [2024-10-01 13:43:21.924702] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924709] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd1750): datao=0, datal=512, cccid=6 00:15:30.097 [2024-10-01 13:43:21.924717] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1036140) on tqpair(0xfd1750): expected_datao=0, payload_size=512 00:15:30.097 [2024-10-01 13:43:21.924726] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924736] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924743] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924749] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:30.097 [2024-10-01 13:43:21.924756] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:30.097 [2024-10-01 13:43:21.924759] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924764] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd1750): 
datao=0, datal=4096, cccid=7 00:15:30.097 [2024-10-01 13:43:21.924769] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10362c0) on tqpair(0xfd1750): expected_datao=0, payload_size=4096 00:15:30.097 [2024-10-01 13:43:21.924774] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924785] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924793] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924803] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.097 [2024-10-01 13:43:21.924810] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.097 [2024-10-01 13:43:21.924816] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.097 [2024-10-01 13:43:21.924824] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035fc0) on tqpair=0xfd1750 00:15:30.097 [2024-10-01 13:43:21.924846] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.098 [2024-10-01 13:43:21.924854] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.098 [2024-10-01 13:43:21.924859] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.098 [2024-10-01 13:43:21.924866] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035e40) on tqpair=0xfd1750 00:15:30.098 [2024-10-01 13:43:21.924887] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.098 [2024-10-01 13:43:21.924900] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.098 [2024-10-01 13:43:21.924907] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.098 ===================================================== 00:15:30.098 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:30.098 ===================================================== 00:15:30.098 Controller Capabilities/Features 00:15:30.098 ================================ 00:15:30.098 Vendor ID: 8086 00:15:30.098 Subsystem Vendor ID: 8086 00:15:30.098 Serial Number: SPDK00000000000001 00:15:30.098 Model Number: SPDK bdev Controller 00:15:30.098 Firmware Version: 25.01 00:15:30.098 Recommended Arb Burst: 6 00:15:30.098 IEEE OUI Identifier: e4 d2 5c 00:15:30.098 Multi-path I/O 00:15:30.098 May have multiple subsystem ports: Yes 00:15:30.098 May have multiple controllers: Yes 00:15:30.098 Associated with SR-IOV VF: No 00:15:30.098 Max Data Transfer Size: 131072 00:15:30.098 Max Number of Namespaces: 32 00:15:30.098 Max Number of I/O Queues: 127 00:15:30.098 NVMe Specification Version (VS): 1.3 00:15:30.098 NVMe Specification Version (Identify): 1.3 00:15:30.098 Maximum Queue Entries: 128 00:15:30.098 Contiguous Queues Required: Yes 00:15:30.098 Arbitration Mechanisms Supported 00:15:30.098 Weighted Round Robin: Not Supported 00:15:30.098 Vendor Specific: Not Supported 00:15:30.098 Reset Timeout: 15000 ms 00:15:30.098 Doorbell Stride: 4 bytes 00:15:30.098 NVM Subsystem Reset: Not Supported 00:15:30.098 Command Sets Supported 00:15:30.098 NVM Command Set: Supported 00:15:30.098 Boot Partition: Not Supported 00:15:30.098 Memory Page Size Minimum: 4096 bytes 00:15:30.098 Memory Page Size Maximum: 4096 bytes 00:15:30.098 Persistent Memory Region: Not Supported 00:15:30.098 Optional Asynchronous Events Supported 00:15:30.098 Namespace Attribute Notices: Supported 00:15:30.098 Firmware Activation Notices: Not Supported 00:15:30.098 ANA 
Change Notices: Not Supported 00:15:30.098 PLE Aggregate Log Change Notices: Not Supported 00:15:30.098 LBA Status Info Alert Notices: Not Supported 00:15:30.098 EGE Aggregate Log Change Notices: Not Supported 00:15:30.098 Normal NVM Subsystem Shutdown event: Not Supported 00:15:30.098 Zone Descriptor Change Notices: Not Supported 00:15:30.098 Discovery Log Change Notices: Not Supported 00:15:30.098 Controller Attributes 00:15:30.098 128-bit Host Identifier: Supported 00:15:30.098 Non-Operational Permissive Mode: Not Supported 00:15:30.098 NVM Sets: Not Supported 00:15:30.098 Read Recovery Levels: Not Supported 00:15:30.098 Endurance Groups: Not Supported 00:15:30.098 Predictable Latency Mode: Not Supported 00:15:30.098 Traffic Based Keep ALive: Not Supported 00:15:30.098 Namespace Granularity: Not Supported 00:15:30.098 SQ Associations: Not Supported 00:15:30.098 UUID List: Not Supported 00:15:30.098 Multi-Domain Subsystem: Not Supported 00:15:30.098 Fixed Capacity Management: Not Supported 00:15:30.098 Variable Capacity Management: Not Supported 00:15:30.098 Delete Endurance Group: Not Supported 00:15:30.098 Delete NVM Set: Not Supported 00:15:30.098 Extended LBA Formats Supported: Not Supported 00:15:30.098 Flexible Data Placement Supported: Not Supported 00:15:30.098 00:15:30.098 Controller Memory Buffer Support 00:15:30.098 ================================ 00:15:30.098 Supported: No 00:15:30.098 00:15:30.098 Persistent Memory Region Support 00:15:30.098 ================================ 00:15:30.098 Supported: No 00:15:30.098 00:15:30.098 Admin Command Set Attributes 00:15:30.098 ============================ 00:15:30.098 Security Send/Receive: Not Supported 00:15:30.098 Format NVM: Not Supported 00:15:30.098 Firmware Activate/Download: Not Supported 00:15:30.098 Namespace Management: Not Supported 00:15:30.098 Device Self-Test: Not Supported 00:15:30.098 Directives: Not Supported 00:15:30.098 NVMe-MI: Not Supported 00:15:30.098 Virtualization Management: Not Supported 00:15:30.098 Doorbell Buffer Config: Not Supported 00:15:30.098 Get LBA Status Capability: Not Supported 00:15:30.098 Command & Feature Lockdown Capability: Not Supported 00:15:30.098 Abort Command Limit: 4 00:15:30.098 Async Event Request Limit: 4 00:15:30.098 Number of Firmware Slots: N/A 00:15:30.098 Firmware Slot 1 Read-Only: N/A 00:15:30.098 Firmware Activation Without Reset: [2024-10-01 13:43:21.924912] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1036140) on tqpair=0xfd1750 00:15:30.098 [2024-10-01 13:43:21.924921] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.098 [2024-10-01 13:43:21.924927] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.098 [2024-10-01 13:43:21.924931] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.098 [2024-10-01 13:43:21.924935] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10362c0) on tqpair=0xfd1750 00:15:30.098 N/A 00:15:30.098 Multiple Update Detection Support: N/A 00:15:30.098 Firmware Update Granularity: No Information Provided 00:15:30.098 Per-Namespace SMART Log: No 00:15:30.098 Asymmetric Namespace Access Log Page: Not Supported 00:15:30.098 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:30.098 Command Effects Log Page: Supported 00:15:30.098 Get Log Page Extended Data: Supported 00:15:30.098 Telemetry Log Pages: Not Supported 00:15:30.098 Persistent Event Log Pages: Not Supported 00:15:30.098 Supported Log Pages Log Page: May Support 00:15:30.098 Commands 
Supported & Effects Log Page: Not Supported 00:15:30.098 Feature Identifiers & Effects Log Page:May Support 00:15:30.098 NVMe-MI Commands & Effects Log Page: May Support 00:15:30.098 Data Area 4 for Telemetry Log: Not Supported 00:15:30.098 Error Log Page Entries Supported: 128 00:15:30.098 Keep Alive: Supported 00:15:30.098 Keep Alive Granularity: 10000 ms 00:15:30.098 00:15:30.098 NVM Command Set Attributes 00:15:30.098 ========================== 00:15:30.098 Submission Queue Entry Size 00:15:30.098 Max: 64 00:15:30.098 Min: 64 00:15:30.098 Completion Queue Entry Size 00:15:30.098 Max: 16 00:15:30.098 Min: 16 00:15:30.098 Number of Namespaces: 32 00:15:30.098 Compare Command: Supported 00:15:30.098 Write Uncorrectable Command: Not Supported 00:15:30.098 Dataset Management Command: Supported 00:15:30.098 Write Zeroes Command: Supported 00:15:30.098 Set Features Save Field: Not Supported 00:15:30.098 Reservations: Supported 00:15:30.098 Timestamp: Not Supported 00:15:30.098 Copy: Supported 00:15:30.098 Volatile Write Cache: Present 00:15:30.098 Atomic Write Unit (Normal): 1 00:15:30.098 Atomic Write Unit (PFail): 1 00:15:30.098 Atomic Compare & Write Unit: 1 00:15:30.098 Fused Compare & Write: Supported 00:15:30.098 Scatter-Gather List 00:15:30.098 SGL Command Set: Supported 00:15:30.098 SGL Keyed: Supported 00:15:30.098 SGL Bit Bucket Descriptor: Not Supported 00:15:30.098 SGL Metadata Pointer: Not Supported 00:15:30.098 Oversized SGL: Not Supported 00:15:30.098 SGL Metadata Address: Not Supported 00:15:30.098 SGL Offset: Supported 00:15:30.098 Transport SGL Data Block: Not Supported 00:15:30.098 Replay Protected Memory Block: Not Supported 00:15:30.098 00:15:30.098 Firmware Slot Information 00:15:30.098 ========================= 00:15:30.098 Active slot: 1 00:15:30.098 Slot 1 Firmware Revision: 25.01 00:15:30.098 00:15:30.098 00:15:30.098 Commands Supported and Effects 00:15:30.098 ============================== 00:15:30.098 Admin Commands 00:15:30.098 -------------- 00:15:30.098 Get Log Page (02h): Supported 00:15:30.098 Identify (06h): Supported 00:15:30.098 Abort (08h): Supported 00:15:30.098 Set Features (09h): Supported 00:15:30.098 Get Features (0Ah): Supported 00:15:30.098 Asynchronous Event Request (0Ch): Supported 00:15:30.098 Keep Alive (18h): Supported 00:15:30.098 I/O Commands 00:15:30.098 ------------ 00:15:30.098 Flush (00h): Supported LBA-Change 00:15:30.098 Write (01h): Supported LBA-Change 00:15:30.098 Read (02h): Supported 00:15:30.098 Compare (05h): Supported 00:15:30.098 Write Zeroes (08h): Supported LBA-Change 00:15:30.098 Dataset Management (09h): Supported LBA-Change 00:15:30.098 Copy (19h): Supported LBA-Change 00:15:30.098 00:15:30.098 Error Log 00:15:30.098 ========= 00:15:30.098 00:15:30.098 Arbitration 00:15:30.098 =========== 00:15:30.098 Arbitration Burst: 1 00:15:30.098 00:15:30.098 Power Management 00:15:30.098 ================ 00:15:30.098 Number of Power States: 1 00:15:30.098 Current Power State: Power State #0 00:15:30.098 Power State #0: 00:15:30.098 Max Power: 0.00 W 00:15:30.098 Non-Operational State: Operational 00:15:30.098 Entry Latency: Not Reported 00:15:30.098 Exit Latency: Not Reported 00:15:30.098 Relative Read Throughput: 0 00:15:30.098 Relative Read Latency: 0 00:15:30.098 Relative Write Throughput: 0 00:15:30.098 Relative Write Latency: 0 00:15:30.099 Idle Power: Not Reported 00:15:30.099 Active Power: Not Reported 00:15:30.099 Non-Operational Permissive Mode: Not Supported 00:15:30.099 00:15:30.099 Health Information 00:15:30.099 
================== 00:15:30.099 Critical Warnings: 00:15:30.099 Available Spare Space: OK 00:15:30.099 Temperature: OK 00:15:30.099 Device Reliability: OK 00:15:30.099 Read Only: No 00:15:30.099 Volatile Memory Backup: OK 00:15:30.099 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:30.099 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:30.099 Available Spare: 0% 00:15:30.099 Available Spare Threshold: 0% 00:15:30.099 Life Percentage Used:[2024-10-01 13:43:21.925071] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925084] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfd1750) 00:15:30.099 [2024-10-01 13:43:21.925099] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.099 [2024-10-01 13:43:21.925133] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10362c0, cid 7, qid 0 00:15:30.099 [2024-10-01 13:43:21.925191] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.099 [2024-10-01 13:43:21.925205] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.099 [2024-10-01 13:43:21.925209] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925214] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10362c0) on tqpair=0xfd1750 00:15:30.099 [2024-10-01 13:43:21.925267] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:30.099 [2024-10-01 13:43:21.925283] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035840) on tqpair=0xfd1750 00:15:30.099 [2024-10-01 13:43:21.925293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.099 [2024-10-01 13:43:21.925304] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10359c0) on tqpair=0xfd1750 00:15:30.099 [2024-10-01 13:43:21.925312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.099 [2024-10-01 13:43:21.925322] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035b40) on tqpair=0xfd1750 00:15:30.099 [2024-10-01 13:43:21.925330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.099 [2024-10-01 13:43:21.925338] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.099 [2024-10-01 13:43:21.925344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.099 [2024-10-01 13:43:21.925355] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925360] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925364] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.099 [2024-10-01 13:43:21.925376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.099 [2024-10-01 13:43:21.925412] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.099 [2024-10-01 13:43:21.925458] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.099 [2024-10-01 13:43:21.925471] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.099 [2024-10-01 13:43:21.925478] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925483] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.099 [2024-10-01 13:43:21.925493] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925500] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925507] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.099 [2024-10-01 13:43:21.925519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.099 [2024-10-01 13:43:21.925574] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.099 [2024-10-01 13:43:21.925648] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.099 [2024-10-01 13:43:21.925660] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.099 [2024-10-01 13:43:21.925667] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925672] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.099 [2024-10-01 13:43:21.925678] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:30.099 [2024-10-01 13:43:21.925684] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:30.099 [2024-10-01 13:43:21.925696] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925704] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925711] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.099 [2024-10-01 13:43:21.925720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.099 [2024-10-01 13:43:21.925753] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.099 [2024-10-01 13:43:21.925803] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.099 [2024-10-01 13:43:21.925821] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.099 [2024-10-01 13:43:21.925827] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925834] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.099 [2024-10-01 13:43:21.925852] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925861] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925868] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.099 [2024-10-01 13:43:21.925881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.099 [2024-10-01 13:43:21.925914] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.099 [2024-10-01 13:43:21.925957] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.099 [2024-10-01 13:43:21.925966] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.099 [2024-10-01 13:43:21.925970] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925975] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.099 [2024-10-01 13:43:21.925989] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.925997] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.926003] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.099 [2024-10-01 13:43:21.926013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.099 [2024-10-01 13:43:21.926043] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.099 [2024-10-01 13:43:21.926087] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.099 [2024-10-01 13:43:21.926101] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.099 [2024-10-01 13:43:21.926106] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.926111] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.099 [2024-10-01 13:43:21.926123] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.926128] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.926132] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.099 [2024-10-01 13:43:21.926141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.099 [2024-10-01 13:43:21.926170] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.099 [2024-10-01 13:43:21.926217] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.099 [2024-10-01 13:43:21.926240] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.099 [2024-10-01 13:43:21.926245] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.926250] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.099 [2024-10-01 13:43:21.926264] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.926269] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.926273] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.099 [2024-10-01 13:43:21.926282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.099 [2024-10-01 13:43:21.926312] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.099 [2024-10-01 13:43:21.926358] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.099 [2024-10-01 
13:43:21.926372] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.099 [2024-10-01 13:43:21.926379] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.099 [2024-10-01 13:43:21.926386] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.099 [2024-10-01 13:43:21.926400] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.926405] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.926410] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.100 [2024-10-01 13:43:21.926418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.100 [2024-10-01 13:43:21.926441] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.100 [2024-10-01 13:43:21.926490] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.100 [2024-10-01 13:43:21.926503] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.100 [2024-10-01 13:43:21.926509] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.926517] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.100 [2024-10-01 13:43:21.926552] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.926561] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.926565] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.100 [2024-10-01 13:43:21.926574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.100 [2024-10-01 13:43:21.926598] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.100 [2024-10-01 13:43:21.926650] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.100 [2024-10-01 13:43:21.926664] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.100 [2024-10-01 13:43:21.926671] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.926678] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.100 [2024-10-01 13:43:21.926696] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.926702] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.926707] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.100 [2024-10-01 13:43:21.926715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.100 [2024-10-01 13:43:21.926738] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.100 [2024-10-01 13:43:21.926782] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.100 [2024-10-01 13:43:21.926795] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.100 [2024-10-01 13:43:21.926801] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.100 
[2024-10-01 13:43:21.926806] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.100 [2024-10-01 13:43:21.926825] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.926834] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.926841] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.100 [2024-10-01 13:43:21.926854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.100 [2024-10-01 13:43:21.926880] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.100 [2024-10-01 13:43:21.926929] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.100 [2024-10-01 13:43:21.926940] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.100 [2024-10-01 13:43:21.926948] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.926955] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.100 [2024-10-01 13:43:21.926974] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.926982] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.926986] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.100 [2024-10-01 13:43:21.926994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.100 [2024-10-01 13:43:21.927017] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.100 [2024-10-01 13:43:21.927064] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.100 [2024-10-01 13:43:21.927077] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.100 [2024-10-01 13:43:21.927084] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927092] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.100 [2024-10-01 13:43:21.927110] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927118] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927122] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.100 [2024-10-01 13:43:21.927130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.100 [2024-10-01 13:43:21.927154] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.100 [2024-10-01 13:43:21.927209] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.100 [2024-10-01 13:43:21.927223] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.100 [2024-10-01 13:43:21.927230] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927238] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.100 [2024-10-01 13:43:21.927254] nvme_tcp.c: 800:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927260] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927264] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.100 [2024-10-01 13:43:21.927272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.100 [2024-10-01 13:43:21.927295] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.100 [2024-10-01 13:43:21.927347] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.100 [2024-10-01 13:43:21.927360] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.100 [2024-10-01 13:43:21.927368] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927375] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.100 [2024-10-01 13:43:21.927390] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927396] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927400] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.100 [2024-10-01 13:43:21.927408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.100 [2024-10-01 13:43:21.927431] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.100 [2024-10-01 13:43:21.927479] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.100 [2024-10-01 13:43:21.927492] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.100 [2024-10-01 13:43:21.927500] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927507] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.100 [2024-10-01 13:43:21.927526] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927548] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927554] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.100 [2024-10-01 13:43:21.927563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.100 [2024-10-01 13:43:21.927593] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.100 [2024-10-01 13:43:21.927644] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.100 [2024-10-01 13:43:21.927665] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.100 [2024-10-01 13:43:21.927671] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927676] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.100 [2024-10-01 13:43:21.927690] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927695] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927699] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.100 [2024-10-01 13:43:21.927708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.100 [2024-10-01 13:43:21.927735] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.100 [2024-10-01 13:43:21.927785] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.100 [2024-10-01 13:43:21.927798] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.100 [2024-10-01 13:43:21.927805] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927810] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.100 [2024-10-01 13:43:21.927823] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927828] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927832] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.100 [2024-10-01 13:43:21.927853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.100 [2024-10-01 13:43:21.927880] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.100 [2024-10-01 13:43:21.927931] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.100 [2024-10-01 13:43:21.927952] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.100 [2024-10-01 13:43:21.927961] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927969] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.100 [2024-10-01 13:43:21.927983] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927989] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.927993] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.100 [2024-10-01 13:43:21.928001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.100 [2024-10-01 13:43:21.928024] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.100 [2024-10-01 13:43:21.928075] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.100 [2024-10-01 13:43:21.928088] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.100 [2024-10-01 13:43:21.928093] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.928098] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.100 [2024-10-01 13:43:21.928110] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.100 [2024-10-01 13:43:21.928116] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.101 [2024-10-01 13:43:21.928120] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.101 [2024-10-01 13:43:21.928128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:30.101 [2024-10-01 13:43:21.928157] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.101 [2024-10-01 13:43:21.928205] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.101 [2024-10-01 13:43:21.928219] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.101 [2024-10-01 13:43:21.928226] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.101 [2024-10-01 13:43:21.928234] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.101 [2024-10-01 13:43:21.928250] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.101 [2024-10-01 13:43:21.928256] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.101 [2024-10-01 13:43:21.928260] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.101 [2024-10-01 13:43:21.928268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.101 [2024-10-01 13:43:21.928295] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.101 [2024-10-01 13:43:21.928344] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.101 [2024-10-01 13:43:21.928358] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.101 [2024-10-01 13:43:21.928365] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.101 [2024-10-01 13:43:21.928370] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.101 [2024-10-01 13:43:21.928383] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.101 [2024-10-01 13:43:21.928388] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.101 [2024-10-01 13:43:21.928392] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.101 [2024-10-01 13:43:21.928400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.101 [2024-10-01 13:43:21.928423] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.101 [2024-10-01 13:43:21.928470] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.101 [2024-10-01 13:43:21.928485] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.101 [2024-10-01 13:43:21.928492] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.101 [2024-10-01 13:43:21.928499] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.101 [2024-10-01 13:43:21.928512] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:30.101 [2024-10-01 13:43:21.928517] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:30.101 [2024-10-01 13:43:21.928521] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1750) 00:15:30.101 [2024-10-01 13:43:21.928530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.101 [2024-10-01 13:43:21.932588] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1035cc0, cid 3, qid 0 00:15:30.101 [2024-10-01 13:43:21.932647] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:30.101 [2024-10-01 13:43:21.932658] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:30.101 [2024-10-01 13:43:21.932666] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:30.101 [2024-10-01 13:43:21.932673] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1035cc0) on tqpair=0xfd1750 00:15:30.101 [2024-10-01 13:43:21.932689] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:15:30.358 0% 00:15:30.358 Data Units Read: 0 00:15:30.358 Data Units Written: 0 00:15:30.358 Host Read Commands: 0 00:15:30.358 Host Write Commands: 0 00:15:30.358 Controller Busy Time: 0 minutes 00:15:30.358 Power Cycles: 0 00:15:30.358 Power On Hours: 0 hours 00:15:30.358 Unsafe Shutdowns: 0 00:15:30.358 Unrecoverable Media Errors: 0 00:15:30.358 Lifetime Error Log Entries: 0 00:15:30.358 Warning Temperature Time: 0 minutes 00:15:30.359 Critical Temperature Time: 0 minutes 00:15:30.359 00:15:30.359 Number of Queues 00:15:30.359 ================ 00:15:30.359 Number of I/O Submission Queues: 127 00:15:30.359 Number of I/O Completion Queues: 127 00:15:30.359 00:15:30.359 Active Namespaces 00:15:30.359 ================= 00:15:30.359 Namespace ID:1 00:15:30.359 Error Recovery Timeout: Unlimited 00:15:30.359 Command Set Identifier: NVM (00h) 00:15:30.359 Deallocate: Supported 00:15:30.359 Deallocated/Unwritten Error: Not Supported 00:15:30.359 Deallocated Read Value: Unknown 00:15:30.359 Deallocate in Write Zeroes: Not Supported 00:15:30.359 Deallocated Guard Field: 0xFFFF 00:15:30.359 Flush: Supported 00:15:30.359 Reservation: Supported 00:15:30.359 Namespace Sharing Capabilities: Multiple Controllers 00:15:30.359 Size (in LBAs): 131072 (0GiB) 00:15:30.359 Capacity (in LBAs): 131072 (0GiB) 00:15:30.359 Utilization (in LBAs): 131072 (0GiB) 00:15:30.359 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:30.359 EUI64: ABCDEF0123456789 00:15:30.359 UUID: 9635788c-f3aa-450f-8f26-c65c20d1a9f2 00:15:30.359 Thin Provisioning: Not Supported 00:15:30.359 Per-NS Atomic Units: Yes 00:15:30.359 Atomic Boundary Size (Normal): 0 00:15:30.359 Atomic Boundary Size (PFail): 0 00:15:30.359 Atomic Boundary Offset: 0 00:15:30.359 Maximum Single Source Range Length: 65535 00:15:30.359 Maximum Copy Length: 65535 00:15:30.359 Maximum Source Range Count: 1 00:15:30.359 NGUID/EUI64 Never Reused: No 00:15:30.359 Namespace Write Protected: No 00:15:30.359 Number of LBA Formats: 1 00:15:30.359 Current LBA Format: LBA Format #00 00:15:30.359 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:30.359 00:15:30.359 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:30.359 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.359 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.359 13:43:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:30.359 13:43:22 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:30.359 rmmod nvme_tcp 00:15:30.359 rmmod nvme_fabrics 00:15:30.359 rmmod nvme_keyring 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 74246 ']' 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 74246 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 74246 ']' 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 74246 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74246 00:15:30.359 killing process with pid 74246 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74246' 00:15:30.359 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 74246 00:15:30.360 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 74246 00:15:30.618 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:30.618 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:30.618 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:30.618 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:15:30.618 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:15:30.618 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:30.618 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:15:30.618 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.619 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:15:30.879 00:15:30.879 real 0m2.891s 00:15:30.879 user 0m7.300s 00:15:30.879 sys 0m0.700s 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:30.879 ************************************ 00:15:30.879 END TEST nvmf_identify 00:15:30.879 ************************************ 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.879 ************************************ 00:15:30.879 START TEST nvmf_perf 00:15:30.879 ************************************ 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:30.879 * Looking for test storage... 
00:15:30.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:30.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.879 --rc genhtml_branch_coverage=1 00:15:30.879 --rc genhtml_function_coverage=1 00:15:30.879 --rc genhtml_legend=1 00:15:30.879 --rc geninfo_all_blocks=1 00:15:30.879 --rc geninfo_unexecuted_blocks=1 00:15:30.879 00:15:30.879 ' 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:30.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.879 --rc genhtml_branch_coverage=1 00:15:30.879 --rc genhtml_function_coverage=1 00:15:30.879 --rc genhtml_legend=1 00:15:30.879 --rc geninfo_all_blocks=1 00:15:30.879 --rc geninfo_unexecuted_blocks=1 00:15:30.879 00:15:30.879 ' 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:30.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.879 --rc genhtml_branch_coverage=1 00:15:30.879 --rc genhtml_function_coverage=1 00:15:30.879 --rc genhtml_legend=1 00:15:30.879 --rc geninfo_all_blocks=1 00:15:30.879 --rc geninfo_unexecuted_blocks=1 00:15:30.879 00:15:30.879 ' 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:30.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.879 --rc genhtml_branch_coverage=1 00:15:30.879 --rc genhtml_function_coverage=1 00:15:30.879 --rc genhtml_legend=1 00:15:30.879 --rc geninfo_all_blocks=1 00:15:30.879 --rc geninfo_unexecuted_blocks=1 00:15:30.879 00:15:30.879 ' 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.879 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:30.880 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:30.880 Cannot find device "nvmf_init_br" 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:30.880 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:31.139 Cannot find device "nvmf_init_br2" 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:31.139 Cannot find device "nvmf_tgt_br" 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.139 Cannot find device "nvmf_tgt_br2" 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:31.139 Cannot find device "nvmf_init_br" 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:31.139 Cannot find device "nvmf_init_br2" 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:31.139 Cannot find device "nvmf_tgt_br" 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:31.139 Cannot find device "nvmf_tgt_br2" 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:31.139 Cannot find device "nvmf_br" 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:31.139 Cannot find device "nvmf_init_if" 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:31.139 Cannot find device "nvmf_init_if2" 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:31.139 13:43:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:31.139 13:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:31.397 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:31.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:31.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:15:31.397 00:15:31.397 --- 10.0.0.3 ping statistics --- 00:15:31.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.398 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:31.398 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:31.398 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:31.398 00:15:31.398 --- 10.0.0.4 ping statistics --- 00:15:31.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.398 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:31.398 00:15:31.398 --- 10.0.0.1 ping statistics --- 00:15:31.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.398 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:31.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:15:31.398 00:15:31.398 --- 10.0.0.2 ping statistics --- 00:15:31.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.398 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=74508 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 74508 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 74508 ']' 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
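
The nvmf_veth_init sequence traced above builds a small virtual topology before the target application is started: a dedicated network namespace (nvmf_tgt_ns_spdk) for the target side, veth pairs for two initiator and two target interfaces, a bridge (nvmf_br) tying the peer ends together, iptables rules admitting NVMe/TCP traffic on port 4420, and ping checks confirming 10.0.0.1-10.0.0.4 are reachable. The sketch below is a condensed, hand-written approximation assembled from the commands visible in this trace, not the literal contents of nvmf/common.sh; the interface names and addresses simply mirror the ones used here.

    # Target-side interfaces live in their own network namespace.
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends carry IP addresses, the *_br ends join the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign the 10.0.0.0/24 addresses.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and enslave the bridge-side ends to nvmf_br.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Admit NVMe/TCP traffic, let it cross the bridge, then verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

With the topology up, nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF); that is the nvmfpid=74508 process whose startup notices follow.
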
00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:31.398 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:31.398 [2024-10-01 13:43:23.213992] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:15:31.398 [2024-10-01 13:43:23.214094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.656 [2024-10-01 13:43:23.350932] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.656 [2024-10-01 13:43:23.419351] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.656 [2024-10-01 13:43:23.419417] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.656 [2024-10-01 13:43:23.419428] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.656 [2024-10-01 13:43:23.419436] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.656 [2024-10-01 13:43:23.419444] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.656 [2024-10-01 13:43:23.419608] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.656 [2024-10-01 13:43:23.419688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.656 [2024-10-01 13:43:23.420358] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.656 [2024-10-01 13:43:23.420404] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.656 [2024-10-01 13:43:23.458335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:31.929 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.929 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:15:31.929 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:31.929 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:31.929 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:31.929 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.929 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:31.929 13:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:32.494 13:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:32.494 13:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:32.753 13:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:32.753 13:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:33.010 13:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:33.011 13:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:15:33.011 13:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:33.011 13:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:33.011 13:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:33.576 [2024-10-01 13:43:25.203387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.576 13:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:33.834 13:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:33.834 13:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:34.091 13:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:34.091 13:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:34.349 13:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:34.914 [2024-10-01 13:43:26.505692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:34.915 13:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:35.172 13:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:35.172 13:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:35.172 13:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:35.173 13:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:36.103 Initializing NVMe Controllers 00:15:36.103 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:36.103 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:36.103 Initialization complete. Launching workers. 00:15:36.103 ======================================================== 00:15:36.103 Latency(us) 00:15:36.103 Device Information : IOPS MiB/s Average min max 00:15:36.103 PCIE (0000:00:10.0) NSID 1 from core 0: 26588.84 103.86 1203.42 331.30 4942.47 00:15:36.103 ======================================================== 00:15:36.103 Total : 26588.84 103.86 1203.42 331.30 4942.47 00:15:36.103 00:15:36.360 13:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:37.732 Initializing NVMe Controllers 00:15:37.732 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:37.732 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:37.732 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:37.732 Initialization complete. Launching workers. 
00:15:37.732 ======================================================== 00:15:37.732 Latency(us) 00:15:37.732 Device Information : IOPS MiB/s Average min max 00:15:37.732 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3338.93 13.04 297.75 109.96 4308.13 00:15:37.732 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8185.84 7405.39 12023.86 00:15:37.732 ======================================================== 00:15:37.732 Total : 3461.92 13.52 578.01 109.96 12023.86 00:15:37.732 00:15:37.732 13:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:38.664 Initializing NVMe Controllers 00:15:38.664 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:38.664 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:38.664 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:38.664 Initialization complete. Launching workers. 00:15:38.664 ======================================================== 00:15:38.664 Latency(us) 00:15:38.664 Device Information : IOPS MiB/s Average min max 00:15:38.664 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7881.10 30.79 4068.52 611.84 13641.01 00:15:38.664 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3496.75 13.66 9250.97 5960.47 29488.10 00:15:38.664 ======================================================== 00:15:38.664 Total : 11377.85 44.44 5661.24 611.84 29488.10 00:15:38.664 00:15:38.921 13:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:38.921 13:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:41.450 Initializing NVMe Controllers 00:15:41.450 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:41.450 Controller IO queue size 128, less than required. 00:15:41.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:41.450 Controller IO queue size 128, less than required. 00:15:41.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:41.450 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:41.450 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:41.450 Initialization complete. Launching workers. 
00:15:41.450 ======================================================== 00:15:41.450 Latency(us) 00:15:41.450 Device Information : IOPS MiB/s Average min max 00:15:41.450 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 751.69 187.92 176114.23 73560.42 251697.42 00:15:41.450 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 355.38 88.85 372983.05 145074.36 602943.23 00:15:41.450 ======================================================== 00:15:41.450 Total : 1107.07 276.77 239311.07 73560.42 602943.23 00:15:41.450 00:15:41.450 13:43:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:41.708 Initializing NVMe Controllers 00:15:41.708 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:41.709 Controller IO queue size 128, less than required. 00:15:41.709 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:41.709 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:41.709 Controller IO queue size 128, less than required. 00:15:41.709 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:41.709 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:41.709 WARNING: Some requested NVMe devices were skipped 00:15:41.709 No valid NVMe controllers or AIO or URING devices found 00:15:41.709 13:43:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:44.241 Initializing NVMe Controllers 00:15:44.241 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:44.241 Controller IO queue size 128, less than required. 00:15:44.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:44.241 Controller IO queue size 128, less than required. 00:15:44.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:44.241 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:44.241 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:44.241 Initialization complete. Launching workers. 
00:15:44.241 00:15:44.241 ==================== 00:15:44.241 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:44.241 TCP transport: 00:15:44.241 polls: 23989 00:15:44.241 idle_polls: 19404 00:15:44.241 sock_completions: 4585 00:15:44.241 nvme_completions: 5449 00:15:44.241 submitted_requests: 8206 00:15:44.241 queued_requests: 1 00:15:44.241 00:15:44.241 ==================== 00:15:44.241 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:44.241 TCP transport: 00:15:44.241 polls: 26058 00:15:44.241 idle_polls: 21493 00:15:44.241 sock_completions: 4565 00:15:44.241 nvme_completions: 5185 00:15:44.241 submitted_requests: 7768 00:15:44.241 queued_requests: 1 00:15:44.241 ======================================================== 00:15:44.241 Latency(us) 00:15:44.241 Device Information : IOPS MiB/s Average min max 00:15:44.241 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1361.28 340.32 97653.41 34479.21 177199.98 00:15:44.241 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1295.32 323.83 99978.55 26228.00 205396.64 00:15:44.241 ======================================================== 00:15:44.241 Total : 2656.60 664.15 98787.11 26228.00 205396.64 00:15:44.241 00:15:44.241 13:43:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:44.241 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:44.807 rmmod nvme_tcp 00:15:44.807 rmmod nvme_fabrics 00:15:44.807 rmmod nvme_keyring 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 74508 ']' 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 74508 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 74508 ']' 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 74508 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74508 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:44.807 killing process with pid 74508 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74508' 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 74508 00:15:44.807 13:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 74508 00:15:45.374 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:45.374 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:45.374 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:45.374 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:45.374 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:45.374 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:15:45.374 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:15:45.374 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:45.374 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:45.374 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:45.374 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:45.374 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:45.374 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:45.633 00:15:45.633 real 0m14.881s 00:15:45.633 user 0m54.644s 00:15:45.633 sys 0m3.613s 00:15:45.633 13:43:37 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:45.633 ************************************ 00:15:45.633 END TEST nvmf_perf 00:15:45.633 ************************************ 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.633 ************************************ 00:15:45.633 START TEST nvmf_fio_host 00:15:45.633 ************************************ 00:15:45.633 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:45.892 * Looking for test storage... 00:15:45.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:45.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.893 --rc genhtml_branch_coverage=1 00:15:45.893 --rc genhtml_function_coverage=1 00:15:45.893 --rc genhtml_legend=1 00:15:45.893 --rc geninfo_all_blocks=1 00:15:45.893 --rc geninfo_unexecuted_blocks=1 00:15:45.893 00:15:45.893 ' 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:45.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.893 --rc genhtml_branch_coverage=1 00:15:45.893 --rc genhtml_function_coverage=1 00:15:45.893 --rc genhtml_legend=1 00:15:45.893 --rc geninfo_all_blocks=1 00:15:45.893 --rc geninfo_unexecuted_blocks=1 00:15:45.893 00:15:45.893 ' 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:45.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.893 --rc genhtml_branch_coverage=1 00:15:45.893 --rc genhtml_function_coverage=1 00:15:45.893 --rc genhtml_legend=1 00:15:45.893 --rc geninfo_all_blocks=1 00:15:45.893 --rc geninfo_unexecuted_blocks=1 00:15:45.893 00:15:45.893 ' 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:45.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.893 --rc genhtml_branch_coverage=1 00:15:45.893 --rc genhtml_function_coverage=1 00:15:45.893 --rc genhtml_legend=1 00:15:45.893 --rc geninfo_all_blocks=1 00:15:45.893 --rc geninfo_unexecuted_blocks=1 00:15:45.893 00:15:45.893 ' 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.893 13:43:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.893 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.894 13:43:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:45.894 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
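
The fio_host test above is repeating the same nvmftestinit bring-up that nvmf_perf used at the start of this section. For context, once its topology was up the nvmf_perf run configured the target entirely over rpc.py (gen_nvme.sh plus load_subsystem_config attached the local controller as Nvme0n1) and then drove it with spdk_nvme_perf. The sketch below is a condensed reconstruction of that sequence from the commands traced earlier, not a verbatim replay of host/perf.sh; the bdev names, NQN, and listener address mirror the trace.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    # TCP transport plus a subsystem backed by a malloc bdev and the local NVMe drive.
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_malloc_create 64 512    # yields Malloc0 (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

    # Fabric-side perf passes (queue depth, IO size, duration and extra flags vary per pass above).
    $perf -q 1   -o 4096   -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    $perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat

    # Teardown: drop the subsystem, then nvmftestfini unloads nvme-tcp and removes the veth/bridge/netns topology.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
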
00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:45.894 Cannot find device "nvmf_init_br" 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:45.894 Cannot find device "nvmf_init_br2" 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:45.894 Cannot find device "nvmf_tgt_br" 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:45.894 Cannot find device "nvmf_tgt_br2" 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:45.894 Cannot find device "nvmf_init_br" 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:45.894 Cannot find device "nvmf_init_br2" 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:45.894 Cannot find device "nvmf_tgt_br" 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:45.894 Cannot find device "nvmf_tgt_br2" 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:45.894 Cannot find device "nvmf_br" 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:45.894 Cannot find device "nvmf_init_if" 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:45.894 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:46.153 Cannot find device "nvmf_init_if2" 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:46.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:46.153 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:46.153 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:15:46.153 00:15:46.153 --- 10.0.0.3 ping statistics --- 00:15:46.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.153 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:46.153 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:46.153 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:15:46.153 00:15:46.153 --- 10.0.0.4 ping statistics --- 00:15:46.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.153 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:46.153 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:46.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:46.154 00:15:46.154 --- 10.0.0.1 ping statistics --- 00:15:46.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.154 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:46.154 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:46.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:15:46.154 00:15:46.154 --- 10.0.0.2 ping statistics --- 00:15:46.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.154 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:46.154 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.154 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:15:46.154 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:46.154 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.154 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:46.154 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:46.154 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.154 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:46.154 13:43:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:46.154 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:46.154 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:46.154 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:46.154 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.412 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74965 00:15:46.412 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:46.412 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:46.412 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74965 00:15:46.412 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 74965 ']' 00:15:46.412 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.412 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:46.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.412 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.412 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:46.412 13:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.412 [2024-10-01 13:43:38.078097] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:15:46.412 [2024-10-01 13:43:38.078195] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.412 [2024-10-01 13:43:38.214371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.670 [2024-10-01 13:43:38.276579] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.670 [2024-10-01 13:43:38.276656] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.670 [2024-10-01 13:43:38.276676] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.670 [2024-10-01 13:43:38.276690] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.670 [2024-10-01 13:43:38.276701] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
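In the entries around host/fio.sh@23-28, the target has just been launched inside the test namespace with '-i 0 -e 0xFFFF -m 0xF' and the harness is waiting for its RPC socket while DPDK prints its startup notices. A condensed sketch of that launch-and-wait pattern, assuming the paths from the trace; the polling loop stands in for the autotest waitforlisten helper and the trap body is simplified:

    # Sketch: start nvmf_tgt in the target netns, keep its pid for cleanup, then wait for the RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    trap 'kill "$nvmfpid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT   # the real script runs nvmftestfini here
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5   # stand-in for waitforlisten polling /var/tmp/spdk.sock
    done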
00:15:46.670 [2024-10-01 13:43:38.276825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.670 [2024-10-01 13:43:38.277312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.670 [2024-10-01 13:43:38.277387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.670 [2024-10-01 13:43:38.277394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.670 [2024-10-01 13:43:38.308368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:47.605 13:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:47.605 13:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:15:47.605 13:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:47.605 [2024-10-01 13:43:39.372237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.605 13:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:47.605 13:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:47.605 13:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.605 13:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:47.864 Malloc1 00:15:47.864 13:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:48.122 13:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:48.380 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:48.637 [2024-10-01 13:43:40.476230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:48.637 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:49.204 13:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:49.204 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:49.204 fio-3.35 00:15:49.204 Starting 1 thread 00:15:51.732 00:15:51.732 test: (groupid=0, jobs=1): err= 0: pid=75043: Tue Oct 1 13:43:43 2024 00:15:51.732 read: IOPS=3938, BW=15.4MiB/s (16.1MB/s)(30.9MiB/2007msec) 00:15:51.732 slat (usec): min=2, max=303, avg= 2.67, stdev= 4.49 00:15:51.732 clat (usec): min=2610, max=27714, avg=17092.96, stdev=6897.77 00:15:51.732 lat (usec): min=2653, max=27717, avg=17095.62, stdev=6897.52 00:15:51.732 clat percentiles (usec): 00:15:51.732 | 1.00th=[ 6521], 5.00th=[ 7177], 10.00th=[ 7373], 20.00th=[ 7832], 00:15:51.732 | 30.00th=[ 8455], 40.00th=[19792], 50.00th=[20841], 60.00th=[21627], 00:15:51.732 | 70.00th=[22152], 80.00th=[22676], 90.00th=[23462], 95.00th=[24249], 00:15:51.732 | 99.00th=[25297], 99.50th=[26084], 99.90th=[27132], 99.95th=[27395], 00:15:51.732 | 99.99th=[27657] 00:15:51.732 bw ( KiB/s): min=12192, max=21400, per=99.49%, avg=15674.00, stdev=4331.39, samples=4 00:15:51.732 iops : min= 3048, max= 5350, avg=3918.50, stdev=1082.85, samples=4 00:15:51.732 write: IOPS=3956, BW=15.5MiB/s (16.2MB/s)(31.0MiB/2007msec); 0 zone resets 00:15:51.732 slat (usec): min=2, max=262, avg= 2.77, stdev= 3.19 00:15:51.732 clat (usec): min=2454, max=25029, avg=15234.77, stdev=6143.09 00:15:51.732 lat (usec): min=2467, max=25031, avg=15237.54, stdev=6142.87 00:15:51.732 clat 
percentiles (usec): 00:15:51.732 | 1.00th=[ 5932], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 7111], 00:15:51.732 | 30.00th=[ 7635], 40.00th=[17695], 50.00th=[18744], 60.00th=[19268], 00:15:51.732 | 70.00th=[19792], 80.00th=[20317], 90.00th=[21103], 95.00th=[21627], 00:15:51.732 | 99.00th=[22938], 99.50th=[23725], 99.90th=[24773], 99.95th=[24773], 00:15:51.732 | 99.99th=[25035] 00:15:51.732 bw ( KiB/s): min=12032, max=22344, per=99.60%, avg=15762.00, stdev=4857.04, samples=4 00:15:51.732 iops : min= 3008, max= 5586, avg=3940.50, stdev=1214.26, samples=4 00:15:51.732 lat (msec) : 4=0.18%, 10=34.09%, 20=23.12%, 50=42.61% 00:15:51.732 cpu : usr=77.42%, sys=19.09%, ctx=11, majf=0, minf=6 00:15:51.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:51.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:51.732 issued rwts: total=7905,7940,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:51.732 00:15:51.732 Run status group 0 (all jobs): 00:15:51.732 READ: bw=15.4MiB/s (16.1MB/s), 15.4MiB/s-15.4MiB/s (16.1MB/s-16.1MB/s), io=30.9MiB (32.4MB), run=2007-2007msec 00:15:51.732 WRITE: bw=15.5MiB/s (16.2MB/s), 15.5MiB/s-15.5MiB/s (16.2MB/s-16.2MB/s), io=31.0MiB (32.5MB), run=2007-2007msec 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:51.732 13:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:51.732 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:51.732 fio-3.35 00:15:51.732 Starting 1 thread 00:15:54.260 00:15:54.260 test: (groupid=0, jobs=1): err= 0: pid=75092: Tue Oct 1 13:43:45 2024 00:15:54.260 read: IOPS=7597, BW=119MiB/s (124MB/s)(238MiB/2006msec) 00:15:54.260 slat (usec): min=3, max=119, avg= 4.02, stdev= 1.73 00:15:54.260 clat (usec): min=2249, max=28245, avg=9518.06, stdev=3302.74 00:15:54.260 lat (usec): min=2253, max=28248, avg=9522.07, stdev=3302.79 00:15:54.260 clat percentiles (usec): 00:15:54.260 | 1.00th=[ 4359], 5.00th=[ 5211], 10.00th=[ 5800], 20.00th=[ 6718], 00:15:54.260 | 30.00th=[ 7504], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9896], 00:15:54.260 | 70.00th=[10945], 80.00th=[11994], 90.00th=[13435], 95.00th=[15401], 00:15:54.260 | 99.00th=[20317], 99.50th=[21103], 99.90th=[21890], 99.95th=[22152], 00:15:54.260 | 99.99th=[25822] 00:15:54.260 bw ( KiB/s): min=60128, max=64800, per=51.27%, avg=62320.00, stdev=2078.85, samples=4 00:15:54.260 iops : min= 3758, max= 4050, avg=3895.00, stdev=129.93, samples=4 00:15:54.260 write: IOPS=4370, BW=68.3MiB/s (71.6MB/s)(128MiB/1868msec); 0 zone resets 00:15:54.260 slat (usec): min=37, max=220, avg=39.67, stdev= 5.30 00:15:54.260 clat (usec): min=3837, max=29661, avg=13050.18, stdev=2791.73 00:15:54.260 lat (usec): min=3875, max=29699, avg=13089.85, stdev=2792.10 00:15:54.260 clat percentiles (usec): 00:15:54.260 | 1.00th=[ 7635], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10945], 00:15:54.260 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12649], 60.00th=[13304], 00:15:54.260 | 70.00th=[14091], 80.00th=[15139], 90.00th=[16712], 95.00th=[18482], 00:15:54.260 | 99.00th=[21103], 99.50th=[22414], 99.90th=[23200], 99.95th=[24511], 00:15:54.260 | 99.99th=[29754] 00:15:54.260 bw ( KiB/s): min=62624, max=67360, per=92.76%, avg=64864.00, stdev=2122.16, samples=4 00:15:54.260 iops : min= 3914, max= 4210, avg=4054.00, stdev=132.63, samples=4 00:15:54.260 lat (msec) : 4=0.27%, 10=43.30%, 20=54.74%, 50=1.68% 00:15:54.260 cpu : usr=81.70%, sys=13.57%, ctx=39, majf=0, minf=5 00:15:54.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:54.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:54.260 issued rwts: total=15241,8164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:54.260 00:15:54.260 Run status group 0 (all jobs): 00:15:54.260 
READ: bw=119MiB/s (124MB/s), 119MiB/s-119MiB/s (124MB/s-124MB/s), io=238MiB (250MB), run=2006-2006msec 00:15:54.260 WRITE: bw=68.3MiB/s (71.6MB/s), 68.3MiB/s-68.3MiB/s (71.6MB/s-71.6MB/s), io=128MiB (134MB), run=1868-1868msec 00:15:54.260 13:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.260 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:54.260 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:54.260 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:54.260 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:54.260 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:54.260 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:54.519 rmmod nvme_tcp 00:15:54.519 rmmod nvme_fabrics 00:15:54.519 rmmod nvme_keyring 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 74965 ']' 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 74965 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 74965 ']' 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 74965 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74965 00:15:54.519 killing process with pid 74965 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74965' 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 74965 00:15:54.519 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 74965 00:15:54.777 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:54.777 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:54.777 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:54.777 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:54.777 13:43:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:15:54.777 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:15:54.777 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:54.777 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:54.777 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.778 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:55.036 00:15:55.036 real 0m9.190s 00:15:55.036 user 0m37.315s 00:15:55.036 sys 0m2.241s 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.036 ************************************ 00:15:55.036 END TEST nvmf_fio_host 00:15:55.036 ************************************ 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.036 ************************************ 00:15:55.036 START TEST nvmf_failover 
00:15:55.036 ************************************ 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:55.036 * Looking for test storage... 00:15:55.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:55.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.036 --rc genhtml_branch_coverage=1 00:15:55.036 --rc genhtml_function_coverage=1 00:15:55.036 --rc genhtml_legend=1 00:15:55.036 --rc geninfo_all_blocks=1 00:15:55.036 --rc geninfo_unexecuted_blocks=1 00:15:55.036 00:15:55.036 ' 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:55.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.036 --rc genhtml_branch_coverage=1 00:15:55.036 --rc genhtml_function_coverage=1 00:15:55.036 --rc genhtml_legend=1 00:15:55.036 --rc geninfo_all_blocks=1 00:15:55.036 --rc geninfo_unexecuted_blocks=1 00:15:55.036 00:15:55.036 ' 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:55.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.036 --rc genhtml_branch_coverage=1 00:15:55.036 --rc genhtml_function_coverage=1 00:15:55.036 --rc genhtml_legend=1 00:15:55.036 --rc geninfo_all_blocks=1 00:15:55.036 --rc geninfo_unexecuted_blocks=1 00:15:55.036 00:15:55.036 ' 00:15:55.036 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:55.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.036 --rc genhtml_branch_coverage=1 00:15:55.036 --rc genhtml_function_coverage=1 00:15:55.036 --rc genhtml_legend=1 00:15:55.036 --rc geninfo_all_blocks=1 00:15:55.037 --rc geninfo_unexecuted_blocks=1 00:15:55.037 00:15:55.037 ' 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b7d6042-0a58-4103-9990-589a1a785035 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=2b7d6042-0a58-4103-9990-589a1a785035 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.037 
13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:55.037 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
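The nvmftestinit traced here is about to rebuild the same veth topology the fio_host run used: the cleanup attempts that follow fail with "Cannot find device" because the previous test already tore everything down, and nvmf_veth_init then recreates it. A compressed bash sketch of that topology for one initiator/target pair, assuming the interface names and 10.0.0.x addressing printed by the script (the real init also creates the *_if2/*_br2 pair):

    # Sketch: one initiator veth stays in the default netns, its target peer path lives in
    # nvmf_tgt_ns_spdk, and both sides are joined through the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target listener address
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP to port 4420
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                       # bridge-local forwarding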
00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:55.037 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:55.295 Cannot find device "nvmf_init_br" 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:55.295 Cannot find device "nvmf_init_br2" 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:55.295 Cannot find device "nvmf_tgt_br" 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.295 Cannot find device "nvmf_tgt_br2" 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:55.295 Cannot find device "nvmf_init_br" 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:55.295 Cannot find device "nvmf_init_br2" 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:55.295 Cannot find device "nvmf_tgt_br" 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:55.295 Cannot find device "nvmf_tgt_br2" 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:55.295 Cannot find device "nvmf_br" 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:55.295 13:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:55.295 Cannot find device "nvmf_init_if" 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:55.295 Cannot find device "nvmf_init_if2" 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:55.295 
13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:55.295 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:55.554 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:55.554 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:15:55.554 00:15:55.554 --- 10.0.0.3 ping statistics --- 00:15:55.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.554 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:55.554 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:55.554 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:15:55.554 00:15:55.554 --- 10.0.0.4 ping statistics --- 00:15:55.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.554 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:55.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:55.554 00:15:55.554 --- 10.0.0.1 ping statistics --- 00:15:55.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.554 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:55.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:15:55.554 00:15:55.554 --- 10.0.0.2 ping statistics --- 00:15:55.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.554 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:55.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
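A condensed sketch of the topology that nvmf_veth_init builds in the trace above (reconstructed from the ip/iptables calls shown; the earlier "Cannot find device" / "Cannot open network namespace" messages are only the guarded cleanup of any previous run and are expected on a fresh host). Initiator addresses 10.0.0.1-2 stay in the root namespace, target addresses 10.0.0.3-4 live inside nvmf_tgt_ns_spdk, and bridge nvmf_br joins the two sides; the second interface pair (nvmf_init_if2 / nvmf_tgt_if2) repeats the same steps and is omitted here.

  # target-side network namespace
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_if end carries traffic, the *_br end is enslaved to the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # addressing: initiator in the root namespace, target inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # bring everything up and join both sides through the bridge
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # allow NVMe/TCP (port 4420) in and permit bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # connectivity check in both directions, as in the pings above
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1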
00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=75364 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 75364 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75364 ']' 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:55.554 13:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:55.554 [2024-10-01 13:43:47.364108] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:15:55.554 [2024-10-01 13:43:47.364408] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.813 [2024-10-01 13:43:47.503848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:55.813 [2024-10-01 13:43:47.569110] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.813 [2024-10-01 13:43:47.569359] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.813 [2024-10-01 13:43:47.569493] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.813 [2024-10-01 13:43:47.569642] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.813 [2024-10-01 13:43:47.569679] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:55.813 [2024-10-01 13:43:47.569927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.813 [2024-10-01 13:43:47.570005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.813 [2024-10-01 13:43:47.570009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.813 [2024-10-01 13:43:47.600198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:56.748 13:43:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:56.748 13:43:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:56.748 13:43:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:56.748 13:43:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:56.748 13:43:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:56.748 13:43:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.748 13:43:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:57.007 [2024-10-01 13:43:48.688736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.007 13:43:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:57.325 Malloc0 00:15:57.325 13:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:57.583 13:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:58.149 13:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:58.407 [2024-10-01 13:43:50.012134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:58.407 13:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:58.666 [2024-10-01 13:43:50.300319] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:58.666 13:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:58.925 [2024-10-01 13:43:50.689238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:58.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
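The target-side bring-up traced above reduces to roughly the following sequence (a sketch assembled from the nvmf_tgt invocation and the rpc.py calls visible in the log, with repo paths shortened; rpc.py talks to the target's default /var/tmp/spdk.sock):

  # nvmf_tgt runs inside the namespace; -m 0xE pins it to cores 1-3 and -e 0xFFFF
  # enables all tracepoint groups (both confirmed by the app notices above)
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

  # TCP transport, with the options the test passes (-o, -u 8192)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

  # 64 MB malloc bdev with 512-byte blocks as the backing namespace
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

  # one subsystem, any host allowed (-a), with three TCP listeners; the failover
  # test will later shuffle the host's paths between ports 4420, 4421 and 4422
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422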
00:15:58.925 13:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75429 00:15:58.925 13:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:58.925 13:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:58.925 13:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75429 /var/tmp/bdevperf.sock 00:15:58.925 13:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75429 ']' 00:15:58.925 13:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:58.925 13:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.925 13:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:58.925 13:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.925 13:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:00.299 13:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.299 13:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:16:00.299 13:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:00.299 NVMe0n1 00:16:00.299 13:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:00.882 NVMe0n1 00:16:00.882 13:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75453 00:16:00.882 13:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:00.882 13:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:01.814 13:43:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:02.071 13:43:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:05.352 13:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:05.352 NVMe0n1 00:16:05.352 13:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:05.918 13:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:09.200 13:44:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:09.200 [2024-10-01 13:44:00.892274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:09.200 13:44:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:10.156 13:44:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:10.419 13:44:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75453 00:16:17.061 { 00:16:17.061 "results": [ 00:16:17.061 { 00:16:17.061 "job": "NVMe0n1", 00:16:17.061 "core_mask": "0x1", 00:16:17.061 "workload": "verify", 00:16:17.061 "status": "finished", 00:16:17.061 "verify_range": { 00:16:17.061 "start": 0, 00:16:17.061 "length": 16384 00:16:17.061 }, 00:16:17.061 "queue_depth": 128, 00:16:17.061 "io_size": 4096, 00:16:17.061 "runtime": 15.009424, 00:16:17.061 "iops": 8683.211294450739, 00:16:17.061 "mibps": 33.9187941189482, 00:16:17.061 "io_failed": 0, 00:16:17.061 "io_timeout": 0, 00:16:17.061 "avg_latency_us": 14706.956808660532, 00:16:17.061 "min_latency_us": 2055.447272727273, 00:16:17.061 "max_latency_us": 19184.174545454545 00:16:17.061 } 00:16:17.061 ], 00:16:17.061 "core_count": 1 00:16:17.061 } 00:16:17.061 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75429 00:16:17.061 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75429 ']' 00:16:17.061 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75429 00:16:17.061 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:16:17.061 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:17.061 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75429 00:16:17.061 killing process with pid 75429 00:16:17.061 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:17.061 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:17.061 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75429' 00:16:17.061 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75429 00:16:17.061 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75429 00:16:17.061 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:17.061 [2024-10-01 13:43:50.775476] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 
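On the initiator side, host/failover.sh drives bdevperf over its own RPC socket. A sketch of the sequence the trace above walks through (commands copied from the trace, repo paths shortened):

  # bdevperf in RPC-driven mode (-z, -r): queue depth 128, 4 KiB I/O, verify workload, 15 s
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

  # two paths to the same subsystem up front (ports 4420 and 4421)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # start the verify job, then pull listeners out from under it
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  sleep 1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 3
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  sleep 3
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
  wait $run_test_pid   # returns once the 15 s verify run finishes and the JSON summary is printed

The summary JSON above is self-consistent: 8683.2 IOPS at 4096 bytes per I/O is about 35.6 MB/s, i.e. 33.92 MiB/s, matching the reported mibps over the 15.009 s runtime. The wall of "ABORTED - SQ DELETION" completions that follows in try.txt is what removing the active listener looks like from the host: the queues on the dropped path are torn down, in-flight I/O is aborted, and it is retried on the path that is still listening, which is exactly what this failover test exercises.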
00:16:17.061 [2024-10-01 13:43:50.775690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75429 ] 00:16:17.061 [2024-10-01 13:43:50.926782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.061 [2024-10-01 13:43:50.993285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.061 [2024-10-01 13:43:51.030256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:17.061 Running I/O for 15 seconds... 00:16:17.061 7136.00 IOPS, 27.88 MiB/s [2024-10-01 13:43:53.783470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.061 [2024-10-01 13:43:53.783561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.783597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.783615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.783632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.783647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.783663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.783678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.783694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.783709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.783725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.783739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.783755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.783769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.783785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.783799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 
13:43:53.783816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.783830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.783846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.783861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.783893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.783941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.783959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.783974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.783996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.784698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.784731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.784769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.784808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.784841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.784870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66096 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.784900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.062 [2024-10-01 13:43:53.784930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.062 [2024-10-01 13:43:53.784946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.062 [2024-10-01 13:43:53.784961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.784977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.784992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:17.063 [2024-10-01 13:43:53.785219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.063 [2024-10-01 13:43:53.785595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.063 [2024-10-01 13:43:53.785624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.063 [2024-10-01 13:43:53.785655] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.063 [2024-10-01 13:43:53.785685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.063 [2024-10-01 13:43:53.785715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.063 [2024-10-01 13:43:53.785760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.063 [2024-10-01 13:43:53.785790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.063 [2024-10-01 13:43:53.785820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.785971] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.785987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.786002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.786020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.786035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.786054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.786077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.786095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.786111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.786132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.786154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.786171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.786185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.786201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.786215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.786231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.786245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.786261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.786276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.786292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.786306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.063 [2024-10-01 13:43:53.786322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.063 [2024-10-01 13:43:53.786337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.786368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.786399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.786429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.786459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.786490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.786521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.786588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.786622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.786654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:17.064 [2024-10-01 13:43:53.786671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.786685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.786716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.786749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.786780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.786811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.786854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.786884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.786914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.786944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.786960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.786974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787000] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.787015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.787046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.787076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.787110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.064 [2024-10-01 13:43:53.787146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787323] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.064 [2024-10-01 13:43:53.787642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.064 [2024-10-01 13:43:53.787658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.065 [2024-10-01 13:43:53.787674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb3e770 is same with the state(6) to be set 00:16:17.065 [2024-10-01 13:43:53.787692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.065 [2024-10-01 13:43:53.787703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.065 [2024-10-01 13:43:53.787714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65976 len:8 PRP1 0x0 PRP2 0x0 00:16:17.065 [2024-10-01 13:43:53.787728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.065 [2024-10-01 13:43:53.787743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.065 [2024-10-01 13:43:53.787754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.065 [2024-10-01 13:43:53.787766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66304 len:8 PRP1 0x0 PRP2 0x0 00:16:17.065 [2024-10-01 13:43:53.787780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.065 [2024-10-01 13:43:53.787796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.065 [2024-10-01 13:43:53.787807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.065 [2024-10-01 13:43:53.787824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66312 len:8 PRP1 0x0 PRP2 0x0 00:16:17.065 [2024-10-01 13:43:53.787850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.065 [2024-10-01 13:43:53.787865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.065 [2024-10-01 13:43:53.787893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.065 [2024-10-01 13:43:53.787904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66320 len:8 PRP1 0x0 PRP2 0x0 00:16:17.065 [2024-10-01 13:43:53.787918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.065 [2024-10-01 13:43:53.787933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.065 [2024-10-01 13:43:53.787944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.065 [2024-10-01 13:43:53.787954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66328 len:8 PRP1 0x0 PRP2 0x0 00:16:17.065 [2024-10-01 13:43:53.787967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.065 [2024-10-01 13:43:53.787982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.065 [2024-10-01 13:43:53.787993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.065 [2024-10-01 13:43:53.788007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66336 len:8 PRP1 0x0 PRP2 0x0 00:16:17.065 [2024-10-01 13:43:53.788027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:17.065 [2024-10-01 13:43:53.788046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.065 [2024-10-01 13:43:53.788063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.065 [2024-10-01 13:43:53.788082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66344 len:8 PRP1 0x0 PRP2 0x0 00:16:17.065 [2024-10-01 13:43:53.788107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.065 [2024-10-01 13:43:53.788132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.065 [2024-10-01 13:43:53.788149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.065 [2024-10-01 13:43:53.788169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66352 len:8 PRP1 0x0 PRP2 0x0 00:16:17.065 [2024-10-01 13:43:53.788188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.065 [2024-10-01 13:43:53.788207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.065 [2024-10-01 13:43:53.788227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.065 [2024-10-01 13:43:53.788239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66360 len:8 PRP1 0x0 PRP2 0x0 00:16:17.065 [2024-10-01 13:43:53.788253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.065 [2024-10-01 13:43:53.788305] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb3e770 was disconnected and freed. reset controller. 
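The dump above lists every queued READ/WRITE being completed with ABORTED - SQ DELETION as the qpair is torn down; the "(00/08)" pair printed with each completion is the NVMe status code type and status code. A minimal, self-contained sketch (not SPDK's spdk_nvme_print_completion) of how those two fields sit in the 16-bit status-plus-phase word, assuming the standard NVMe completion layout:

/* Illustrative decode of the "(00/08)" pair printed with each completion above.
 * In the 16-bit status+phase word of an NVMe completion entry, the phase tag is
 * bit 0, the Status Code (SC) is bits 8:1 and the Status Code Type (SCT) is
 * bits 11:9. SCT 0x0 / SC 0x08 is "Command Aborted due to SQ Deletion", which
 * matches the ABORTED - SQ DELETION text in the log. Stand-alone sketch only. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t status = (0x0 << 9) | (0x08 << 1); /* SCT=0x0, SC=0x08, phase=0 */

    uint8_t sct = (status >> 9) & 0x7;
    uint8_t sc  = (status >> 1) & 0xff;

    printf("SCT=0x%02x SC=0x%02x -> %s\n", sct, sc,
           (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other");
    return 0;
}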
00:16:17.065 [2024-10-01 13:43:53.788435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.065 [2024-10-01 13:43:53.788464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.065 [2024-10-01 13:43:53.788481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.065 [2024-10-01 13:43:53.788495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.065 [2024-10-01 13:43:53.788524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.065 [2024-10-01 13:43:53.788560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.065 [2024-10-01 13:43:53.788584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.065 [2024-10-01 13:43:53.788598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.065 [2024-10-01 13:43:53.788612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.065 [2024-10-01 13:43:53.789679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.065 [2024-10-01 13:43:53.789723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.065 [2024-10-01 13:43:53.790174] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.065 [2024-10-01 13:43:53.790209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.065 [2024-10-01 13:43:53.790228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.065 [2024-10-01 13:43:53.790264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.065 [2024-10-01 13:43:53.790298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.065 [2024-10-01 13:43:53.790315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.065 [2024-10-01 13:43:53.790331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.065 [2024-10-01 13:43:53.790367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
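From this point on, every reset attempt fails the same way: uring_sock_create reports "connect() failed, errno = 111", which is ECONNREFUSED, i.e. nothing is accepting TCP connections on 10.0.0.3:4420 at that moment. A minimal sketch of the underlying POSIX behaviour the uring socket layer wraps (illustrative only, not SPDK code):

/* Illustrative sketch only: shows how connect() surfaces ECONNREFUSED (errno 111)
 * when no listener is bound to the target address/port. The address and port
 * mirror the log (10.0.0.3:4420); this is not SPDK's uring_sock_create(). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.3", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the route up but no target listening, this typically prints
         * errno = 111 (Connection refused), as in the log entries above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}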
00:16:17.065 [2024-10-01 13:43:53.801348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.065 [2024-10-01 13:43:53.801487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.065 [2024-10-01 13:43:53.801521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.065 [2024-10-01 13:43:53.801558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.065 [2024-10-01 13:43:53.801775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.065 [2024-10-01 13:43:53.801927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.065 [2024-10-01 13:43:53.801956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.065 [2024-10-01 13:43:53.801973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.065 [2024-10-01 13:43:53.802031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.065 [2024-10-01 13:43:53.812803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.065 [2024-10-01 13:43:53.812946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.065 [2024-10-01 13:43:53.812981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.065 [2024-10-01 13:43:53.813000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.065 [2024-10-01 13:43:53.813036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.065 [2024-10-01 13:43:53.813094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.065 [2024-10-01 13:43:53.813114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.065 [2024-10-01 13:43:53.813129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.065 [2024-10-01 13:43:53.813162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.065 [2024-10-01 13:43:53.822937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.065 [2024-10-01 13:43:53.823187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.065 [2024-10-01 13:43:53.823230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.065 [2024-10-01 13:43:53.823253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.065 [2024-10-01 13:43:53.823296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.065 [2024-10-01 13:43:53.823331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.065 [2024-10-01 13:43:53.823349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.065 [2024-10-01 13:43:53.823366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.065 [2024-10-01 13:43:53.823400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.065 [2024-10-01 13:43:53.834519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.065 [2024-10-01 13:43:53.834750] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.065 [2024-10-01 13:43:53.834789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.065 [2024-10-01 13:43:53.834809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.065 [2024-10-01 13:43:53.835620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.065 [2024-10-01 13:43:53.835829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.065 [2024-10-01 13:43:53.835882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.066 [2024-10-01 13:43:53.835907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.066 [2024-10-01 13:43:53.835962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.066 [2024-10-01 13:43:53.844705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.066 [2024-10-01 13:43:53.844905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.066 [2024-10-01 13:43:53.844944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.066 [2024-10-01 13:43:53.844964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.066 [2024-10-01 13:43:53.845003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.066 [2024-10-01 13:43:53.845050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.066 [2024-10-01 13:43:53.845068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.066 [2024-10-01 13:43:53.845084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.066 [2024-10-01 13:43:53.845166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.066 [2024-10-01 13:43:53.854866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.066 [2024-10-01 13:43:53.856310] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.066 [2024-10-01 13:43:53.856363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.066 [2024-10-01 13:43:53.856386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.066 [2024-10-01 13:43:53.857278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.066 [2024-10-01 13:43:53.857443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.066 [2024-10-01 13:43:53.857483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.066 [2024-10-01 13:43:53.857503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.066 [2024-10-01 13:43:53.857558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.066 [2024-10-01 13:43:53.866479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.066 [2024-10-01 13:43:53.866718] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.066 [2024-10-01 13:43:53.866757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.066 [2024-10-01 13:43:53.866778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.066 [2024-10-01 13:43:53.866819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.066 [2024-10-01 13:43:53.866857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.066 [2024-10-01 13:43:53.866875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.066 [2024-10-01 13:43:53.866892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.066 [2024-10-01 13:43:53.866927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.066 [2024-10-01 13:43:53.878286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.066 [2024-10-01 13:43:53.879098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.066 [2024-10-01 13:43:53.879148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.066 [2024-10-01 13:43:53.879172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.066 [2024-10-01 13:43:53.879286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.066 [2024-10-01 13:43:53.879329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.066 [2024-10-01 13:43:53.879348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.066 [2024-10-01 13:43:53.879363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.066 [2024-10-01 13:43:53.879399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.066 [2024-10-01 13:43:53.889370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.066 [2024-10-01 13:43:53.889509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.066 [2024-10-01 13:43:53.889556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.066 [2024-10-01 13:43:53.889609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.066 [2024-10-01 13:43:53.889647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.066 [2024-10-01 13:43:53.889681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.066 [2024-10-01 13:43:53.889699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.066 [2024-10-01 13:43:53.889714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.066 [2024-10-01 13:43:53.889746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.066 [2024-10-01 13:43:53.900771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.066 [2024-10-01 13:43:53.900936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.066 [2024-10-01 13:43:53.900986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.066 [2024-10-01 13:43:53.901008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.066 [2024-10-01 13:43:53.901044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.066 [2024-10-01 13:43:53.901077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.066 [2024-10-01 13:43:53.901095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.066 [2024-10-01 13:43:53.901110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.066 [2024-10-01 13:43:53.901143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.066 [2024-10-01 13:43:53.911285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.066 [2024-10-01 13:43:53.911488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.066 [2024-10-01 13:43:53.911525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.066 [2024-10-01 13:43:53.911565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.066 [2024-10-01 13:43:53.912562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.066 [2024-10-01 13:43:53.912797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.066 [2024-10-01 13:43:53.912834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.066 [2024-10-01 13:43:53.912854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.066 [2024-10-01 13:43:53.912946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.066 [2024-10-01 13:43:53.922358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.066 [2024-10-01 13:43:53.922509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.066 [2024-10-01 13:43:53.922565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.066 [2024-10-01 13:43:53.922589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.066 [2024-10-01 13:43:53.922627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.066 [2024-10-01 13:43:53.922661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.066 [2024-10-01 13:43:53.922704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.066 [2024-10-01 13:43:53.922721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.066 [2024-10-01 13:43:53.922755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.066 [2024-10-01 13:43:53.932854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.067 [2024-10-01 13:43:53.933079] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.067 [2024-10-01 13:43:53.933119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.067 [2024-10-01 13:43:53.933140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.067 [2024-10-01 13:43:53.933179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.067 [2024-10-01 13:43:53.933213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.067 [2024-10-01 13:43:53.933232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.067 [2024-10-01 13:43:53.933248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.067 [2024-10-01 13:43:53.933281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.067 [2024-10-01 13:43:53.944357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.067 [2024-10-01 13:43:53.944605] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.067 [2024-10-01 13:43:53.944646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.067 [2024-10-01 13:43:53.944668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.067 [2024-10-01 13:43:53.944708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.067 [2024-10-01 13:43:53.944743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.067 [2024-10-01 13:43:53.944761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.067 [2024-10-01 13:43:53.944777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.067 [2024-10-01 13:43:53.944813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.067 [2024-10-01 13:43:53.955020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.067 [2024-10-01 13:43:53.955214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.067 [2024-10-01 13:43:53.955251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.067 [2024-10-01 13:43:53.955271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.067 [2024-10-01 13:43:53.955307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.067 [2024-10-01 13:43:53.956323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.067 [2024-10-01 13:43:53.956369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.067 [2024-10-01 13:43:53.956397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.067 [2024-10-01 13:43:53.956629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.067 [2024-10-01 13:43:53.966659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.067 [2024-10-01 13:43:53.966928] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.067 [2024-10-01 13:43:53.966967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.067 [2024-10-01 13:43:53.966987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.067 [2024-10-01 13:43:53.967026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.067 [2024-10-01 13:43:53.967080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.067 [2024-10-01 13:43:53.967102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.067 [2024-10-01 13:43:53.967118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.067 [2024-10-01 13:43:53.967153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.067 [2024-10-01 13:43:53.977583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.067 [2024-10-01 13:43:53.977769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.067 [2024-10-01 13:43:53.977807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.067 [2024-10-01 13:43:53.977827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.067 [2024-10-01 13:43:53.977863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.067 [2024-10-01 13:43:53.977897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.067 [2024-10-01 13:43:53.977915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.067 [2024-10-01 13:43:53.977931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.067 [2024-10-01 13:43:53.977965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.067 [2024-10-01 13:43:53.989026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.067 [2024-10-01 13:43:53.989337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.067 [2024-10-01 13:43:53.989384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.067 [2024-10-01 13:43:53.989406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.067 [2024-10-01 13:43:53.989450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.067 [2024-10-01 13:43:53.989486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.067 [2024-10-01 13:43:53.989505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.067 [2024-10-01 13:43:53.989519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.067 [2024-10-01 13:43:53.989569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.067 [2024-10-01 13:43:53.999418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.067 [2024-10-01 13:43:53.999567] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.067 [2024-10-01 13:43:53.999609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.067 [2024-10-01 13:43:53.999630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.067 [2024-10-01 13:43:53.999695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.067 [2024-10-01 13:43:54.000655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.067 [2024-10-01 13:43:54.000696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.067 [2024-10-01 13:43:54.000714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.067 [2024-10-01 13:43:54.000911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.067 [2024-10-01 13:43:54.010511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.067 [2024-10-01 13:43:54.010652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.067 [2024-10-01 13:43:54.010687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.067 [2024-10-01 13:43:54.010707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.067 [2024-10-01 13:43:54.010747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.067 [2024-10-01 13:43:54.010780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.067 [2024-10-01 13:43:54.010798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.067 [2024-10-01 13:43:54.010812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.067 [2024-10-01 13:43:54.010845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.067 [2024-10-01 13:43:54.020795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.067 [2024-10-01 13:43:54.020964] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.067 [2024-10-01 13:43:54.021024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.067 [2024-10-01 13:43:54.021053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.067 [2024-10-01 13:43:54.021099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.067 [2024-10-01 13:43:54.021140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.067 [2024-10-01 13:43:54.021163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.067 [2024-10-01 13:43:54.021183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.067 [2024-10-01 13:43:54.021223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.067 [2024-10-01 13:43:54.032680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.067 [2024-10-01 13:43:54.033066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.067 [2024-10-01 13:43:54.033125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.067 [2024-10-01 13:43:54.033151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.067 [2024-10-01 13:43:54.033200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.067 [2024-10-01 13:43:54.033236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.067 [2024-10-01 13:43:54.033254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.067 [2024-10-01 13:43:54.033301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.067 [2024-10-01 13:43:54.033340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.068 [2024-10-01 13:43:54.043660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.068 [2024-10-01 13:43:54.043838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.068 [2024-10-01 13:43:54.043889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.068 [2024-10-01 13:43:54.043912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.068 [2024-10-01 13:43:54.044892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.068 [2024-10-01 13:43:54.045114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.068 [2024-10-01 13:43:54.045160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.068 [2024-10-01 13:43:54.045179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.068 [2024-10-01 13:43:54.045267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.068 [2024-10-01 13:43:54.055028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.068 [2024-10-01 13:43:54.055160] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.068 [2024-10-01 13:43:54.055201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.068 [2024-10-01 13:43:54.055221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.068 [2024-10-01 13:43:54.055256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.068 [2024-10-01 13:43:54.055289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.068 [2024-10-01 13:43:54.055306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.068 [2024-10-01 13:43:54.055320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.068 [2024-10-01 13:43:54.055362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.068 [2024-10-01 13:43:54.065446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.068 [2024-10-01 13:43:54.065593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.068 [2024-10-01 13:43:54.065629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.068 [2024-10-01 13:43:54.065649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.068 [2024-10-01 13:43:54.065685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.068 [2024-10-01 13:43:54.065718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.068 [2024-10-01 13:43:54.065737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.068 [2024-10-01 13:43:54.065751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.068 [2024-10-01 13:43:54.065784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.068 [2024-10-01 13:43:54.076727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.068 [2024-10-01 13:43:54.076869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.068 [2024-10-01 13:43:54.076942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.068 [2024-10-01 13:43:54.076965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.068 [2024-10-01 13:43:54.077016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.068 [2024-10-01 13:43:54.077053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.068 [2024-10-01 13:43:54.077071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.068 [2024-10-01 13:43:54.077086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.068 [2024-10-01 13:43:54.077129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.068 [2024-10-01 13:43:54.087096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.068 [2024-10-01 13:43:54.087223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.068 [2024-10-01 13:43:54.087258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.068 [2024-10-01 13:43:54.087277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.068 [2024-10-01 13:43:54.087312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.068 [2024-10-01 13:43:54.088296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.068 [2024-10-01 13:43:54.088340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.068 [2024-10-01 13:43:54.088358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.068 [2024-10-01 13:43:54.088573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.068 [2024-10-01 13:43:54.098074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.068 [2024-10-01 13:43:54.098216] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.068 [2024-10-01 13:43:54.098261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.068 [2024-10-01 13:43:54.098282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.068 [2024-10-01 13:43:54.098317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.068 [2024-10-01 13:43:54.098350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.068 [2024-10-01 13:43:54.098376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.068 [2024-10-01 13:43:54.098395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.068 [2024-10-01 13:43:54.098429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.068 [2024-10-01 13:43:54.108563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.068 [2024-10-01 13:43:54.108685] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.068 [2024-10-01 13:43:54.108733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.068 [2024-10-01 13:43:54.108759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.068 [2024-10-01 13:43:54.109027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.068 [2024-10-01 13:43:54.109131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.068 [2024-10-01 13:43:54.109155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.068 [2024-10-01 13:43:54.109171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.068 [2024-10-01 13:43:54.109204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.068 [2024-10-01 13:43:54.118703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.068 [2024-10-01 13:43:54.118921] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.068 [2024-10-01 13:43:54.118960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.068 [2024-10-01 13:43:54.118981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.068 [2024-10-01 13:43:54.120276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.068 [2024-10-01 13:43:54.121211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.068 [2024-10-01 13:43:54.121254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.068 [2024-10-01 13:43:54.121274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.068 [2024-10-01 13:43:54.121504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.068 [2024-10-01 13:43:54.128873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.068 [2024-10-01 13:43:54.129282] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.068 [2024-10-01 13:43:54.129331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.068 [2024-10-01 13:43:54.129353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.068 [2024-10-01 13:43:54.129509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.068 [2024-10-01 13:43:54.129655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.068 [2024-10-01 13:43:54.129686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.068 [2024-10-01 13:43:54.129704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.068 [2024-10-01 13:43:54.129747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.068 [2024-10-01 13:43:54.139007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.068 [2024-10-01 13:43:54.139211] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.068 [2024-10-01 13:43:54.139250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.068 [2024-10-01 13:43:54.139270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.068 [2024-10-01 13:43:54.139307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.068 [2024-10-01 13:43:54.139340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.068 [2024-10-01 13:43:54.139357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.068 [2024-10-01 13:43:54.139374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.068 [2024-10-01 13:43:54.139442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.068 [2024-10-01 13:43:54.149328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.068 [2024-10-01 13:43:54.149549] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.068 [2024-10-01 13:43:54.149587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.069 [2024-10-01 13:43:54.149607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.069 [2024-10-01 13:43:54.150396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.069 [2024-10-01 13:43:54.150617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.069 [2024-10-01 13:43:54.150653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.069 [2024-10-01 13:43:54.150672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.069 [2024-10-01 13:43:54.150717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.069 [2024-10-01 13:43:54.159550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.069 [2024-10-01 13:43:54.159769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.069 [2024-10-01 13:43:54.159814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.069 [2024-10-01 13:43:54.159835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.069 [2024-10-01 13:43:54.159903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.069 [2024-10-01 13:43:54.159961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.069 [2024-10-01 13:43:54.159984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.069 [2024-10-01 13:43:54.160000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.069 [2024-10-01 13:43:54.160034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.069 [2024-10-01 13:43:54.171135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.069 [2024-10-01 13:43:54.171359] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.069 [2024-10-01 13:43:54.171398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.069 [2024-10-01 13:43:54.171418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.069 [2024-10-01 13:43:54.171456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.069 [2024-10-01 13:43:54.171490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.069 [2024-10-01 13:43:54.171509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.069 [2024-10-01 13:43:54.171526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.069 [2024-10-01 13:43:54.171578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.069 [2024-10-01 13:43:54.182603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.069 [2024-10-01 13:43:54.182823] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.069 [2024-10-01 13:43:54.182862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.069 [2024-10-01 13:43:54.182916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.069 [2024-10-01 13:43:54.182957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.069 [2024-10-01 13:43:54.182992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.069 [2024-10-01 13:43:54.183010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.069 [2024-10-01 13:43:54.183025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.069 [2024-10-01 13:43:54.183059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.069 [2024-10-01 13:43:54.193094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.069 [2024-10-01 13:43:54.193349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.069 [2024-10-01 13:43:54.193390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.069 [2024-10-01 13:43:54.193411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.069 [2024-10-01 13:43:54.194403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.069 [2024-10-01 13:43:54.194685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.069 [2024-10-01 13:43:54.194724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.069 [2024-10-01 13:43:54.194744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.069 [2024-10-01 13:43:54.194868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.069 [2024-10-01 13:43:54.204214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.069 [2024-10-01 13:43:54.204416] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.069 [2024-10-01 13:43:54.204452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.069 [2024-10-01 13:43:54.204472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.069 [2024-10-01 13:43:54.204509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.069 [2024-10-01 13:43:54.204560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.069 [2024-10-01 13:43:54.204581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.069 [2024-10-01 13:43:54.204598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.069 [2024-10-01 13:43:54.204632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.069 [2024-10-01 13:43:54.214457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.069 [2024-10-01 13:43:54.214610] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.069 [2024-10-01 13:43:54.214648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.069 [2024-10-01 13:43:54.214668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.069 [2024-10-01 13:43:54.214703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.069 [2024-10-01 13:43:54.214737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.069 [2024-10-01 13:43:54.214786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.069 [2024-10-01 13:43:54.214803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.069 [2024-10-01 13:43:54.214838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.069 [2024-10-01 13:43:54.225868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.069 [2024-10-01 13:43:54.226083] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.069 [2024-10-01 13:43:54.226122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.069 [2024-10-01 13:43:54.226143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.069 [2024-10-01 13:43:54.226180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.069 [2024-10-01 13:43:54.226214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.069 [2024-10-01 13:43:54.226231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.069 [2024-10-01 13:43:54.226248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.069 [2024-10-01 13:43:54.226282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.069 [2024-10-01 13:43:54.236216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.069 [2024-10-01 13:43:54.236390] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.069 [2024-10-01 13:43:54.236426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.069 [2024-10-01 13:43:54.236446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.069 [2024-10-01 13:43:54.237387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.069 [2024-10-01 13:43:54.237637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.069 [2024-10-01 13:43:54.237675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.069 [2024-10-01 13:43:54.237694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.069 [2024-10-01 13:43:54.237777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.069 [2024-10-01 13:43:54.247170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.069 [2024-10-01 13:43:54.247304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.069 [2024-10-01 13:43:54.247339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.069 [2024-10-01 13:43:54.247357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.069 [2024-10-01 13:43:54.247391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.069 [2024-10-01 13:43:54.247424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.069 [2024-10-01 13:43:54.247442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.069 [2024-10-01 13:43:54.247456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.069 [2024-10-01 13:43:54.247488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.069 [2024-10-01 13:43:54.257508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.069 [2024-10-01 13:43:54.257755] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.069 [2024-10-01 13:43:54.257794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.069 [2024-10-01 13:43:54.257814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.069 [2024-10-01 13:43:54.257851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.070 [2024-10-01 13:43:54.257885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.070 [2024-10-01 13:43:54.257903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.070 [2024-10-01 13:43:54.257919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.070 [2024-10-01 13:43:54.257958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.070 [2024-10-01 13:43:54.269057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.070 [2024-10-01 13:43:54.269253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.070 [2024-10-01 13:43:54.269290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.070 [2024-10-01 13:43:54.269309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.070 [2024-10-01 13:43:54.269346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.070 [2024-10-01 13:43:54.269379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.070 [2024-10-01 13:43:54.269396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.070 [2024-10-01 13:43:54.269412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.070 [2024-10-01 13:43:54.269446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.070 [2024-10-01 13:43:54.279379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.070 [2024-10-01 13:43:54.279499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.070 [2024-10-01 13:43:54.279532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.070 [2024-10-01 13:43:54.279569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.070 [2024-10-01 13:43:54.279619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.070 [2024-10-01 13:43:54.280569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.070 [2024-10-01 13:43:54.280608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.070 [2024-10-01 13:43:54.280626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.070 [2024-10-01 13:43:54.280818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.070 [2024-10-01 13:43:54.290328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.070 [2024-10-01 13:43:54.290449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.070 [2024-10-01 13:43:54.290501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.070 [2024-10-01 13:43:54.290522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.070 [2024-10-01 13:43:54.290604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.070 [2024-10-01 13:43:54.290638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.070 [2024-10-01 13:43:54.290655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.070 [2024-10-01 13:43:54.290669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.070 [2024-10-01 13:43:54.290701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.070 [2024-10-01 13:43:54.300687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.070 [2024-10-01 13:43:54.300862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.070 [2024-10-01 13:43:54.300898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.070 [2024-10-01 13:43:54.300918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.070 [2024-10-01 13:43:54.300954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.070 [2024-10-01 13:43:54.300987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.070 [2024-10-01 13:43:54.301005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.070 [2024-10-01 13:43:54.301021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.070 [2024-10-01 13:43:54.301053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.070 [2024-10-01 13:43:54.311962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.070 [2024-10-01 13:43:54.312095] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.070 [2024-10-01 13:43:54.312129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.070 [2024-10-01 13:43:54.312148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.070 [2024-10-01 13:43:54.312182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.070 [2024-10-01 13:43:54.312215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.070 [2024-10-01 13:43:54.312233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.070 [2024-10-01 13:43:54.312247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.070 [2024-10-01 13:43:54.312280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.070 [2024-10-01 13:43:54.322315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.070 [2024-10-01 13:43:54.322451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.070 [2024-10-01 13:43:54.322495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.070 [2024-10-01 13:43:54.322515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.070 [2024-10-01 13:43:54.322564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.070 [2024-10-01 13:43:54.322600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.070 [2024-10-01 13:43:54.322618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.070 [2024-10-01 13:43:54.322656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.070 [2024-10-01 13:43:54.323586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.070 [2024-10-01 13:43:54.333270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.070 [2024-10-01 13:43:54.333393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.070 [2024-10-01 13:43:54.333426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.070 [2024-10-01 13:43:54.333444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.070 [2024-10-01 13:43:54.333478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.070 [2024-10-01 13:43:54.333510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.070 [2024-10-01 13:43:54.333528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.070 [2024-10-01 13:43:54.333558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.070 [2024-10-01 13:43:54.333593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.070 [2024-10-01 13:43:54.343595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.070 [2024-10-01 13:43:54.343716] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.070 [2024-10-01 13:43:54.343758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.070 [2024-10-01 13:43:54.343778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.070 [2024-10-01 13:43:54.343812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.070 [2024-10-01 13:43:54.343845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.070 [2024-10-01 13:43:54.343863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.070 [2024-10-01 13:43:54.343889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.070 [2024-10-01 13:43:54.343922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.070 [2024-10-01 13:43:54.354873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.070 [2024-10-01 13:43:54.355008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.070 [2024-10-01 13:43:54.355041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.070 [2024-10-01 13:43:54.355059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.070 [2024-10-01 13:43:54.355094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.071 [2024-10-01 13:43:54.355127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.071 [2024-10-01 13:43:54.355144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.071 [2024-10-01 13:43:54.355158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.071 [2024-10-01 13:43:54.355190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.071 [2024-10-01 13:43:54.365235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.071 [2024-10-01 13:43:54.365379] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.071 [2024-10-01 13:43:54.365453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.071 [2024-10-01 13:43:54.365476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.071 [2024-10-01 13:43:54.366415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.071 [2024-10-01 13:43:54.366641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.071 [2024-10-01 13:43:54.366676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.071 [2024-10-01 13:43:54.366694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.071 [2024-10-01 13:43:54.366774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.071 [2024-10-01 13:43:54.376274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.071 [2024-10-01 13:43:54.376398] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.071 [2024-10-01 13:43:54.376430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.071 [2024-10-01 13:43:54.376449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.071 [2024-10-01 13:43:54.376483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.071 [2024-10-01 13:43:54.376515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.071 [2024-10-01 13:43:54.376533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.071 [2024-10-01 13:43:54.376565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.071 [2024-10-01 13:43:54.376598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.071 [2024-10-01 13:43:54.386504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.071 [2024-10-01 13:43:54.386676] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.071 [2024-10-01 13:43:54.386712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.071 [2024-10-01 13:43:54.386732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.071 [2024-10-01 13:43:54.386767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.071 [2024-10-01 13:43:54.386801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.071 [2024-10-01 13:43:54.386819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.071 [2024-10-01 13:43:54.386835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.071 [2024-10-01 13:43:54.386868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.071 [2024-10-01 13:43:54.398222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.071 [2024-10-01 13:43:54.398380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.071 [2024-10-01 13:43:54.398423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.071 [2024-10-01 13:43:54.398443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.071 [2024-10-01 13:43:54.398479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.071 [2024-10-01 13:43:54.398577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.071 [2024-10-01 13:43:54.398605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.071 [2024-10-01 13:43:54.398620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.071 [2024-10-01 13:43:54.398655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.071 [2024-10-01 13:43:54.408517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.071 [2024-10-01 13:43:54.408685] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.071 [2024-10-01 13:43:54.408720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.071 [2024-10-01 13:43:54.408739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.071 [2024-10-01 13:43:54.409676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.071 [2024-10-01 13:43:54.409891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.071 [2024-10-01 13:43:54.409928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.071 [2024-10-01 13:43:54.409946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.071 [2024-10-01 13:43:54.410027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.071 [2024-10-01 13:43:54.419817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.071 [2024-10-01 13:43:54.419984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.071 [2024-10-01 13:43:54.420034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.071 [2024-10-01 13:43:54.420056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.071 [2024-10-01 13:43:54.420092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.071 [2024-10-01 13:43:54.420126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.071 [2024-10-01 13:43:54.420145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.071 [2024-10-01 13:43:54.420160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.071 [2024-10-01 13:43:54.420194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.071 [2024-10-01 13:43:54.430595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.071 [2024-10-01 13:43:54.430721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.071 [2024-10-01 13:43:54.430756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.071 [2024-10-01 13:43:54.430774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.071 [2024-10-01 13:43:54.430809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.071 [2024-10-01 13:43:54.430842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.071 [2024-10-01 13:43:54.430860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.071 [2024-10-01 13:43:54.430875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.071 [2024-10-01 13:43:54.430934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.071 [2024-10-01 13:43:54.442092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.071 [2024-10-01 13:43:54.442247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.071 [2024-10-01 13:43:54.442282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.071 [2024-10-01 13:43:54.442301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.071 [2024-10-01 13:43:54.442335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.071 [2024-10-01 13:43:54.442368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.071 [2024-10-01 13:43:54.442385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.071 [2024-10-01 13:43:54.442400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.071 [2024-10-01 13:43:54.442433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.071 [2024-10-01 13:43:54.452859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.071 [2024-10-01 13:43:54.452991] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.071 [2024-10-01 13:43:54.453025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.071 [2024-10-01 13:43:54.453044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.071 [2024-10-01 13:43:54.453079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.071 [2024-10-01 13:43:54.454010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.071 [2024-10-01 13:43:54.454050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.071 [2024-10-01 13:43:54.454068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.071 [2024-10-01 13:43:54.454267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.071 [2024-10-01 13:43:54.464039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.071 [2024-10-01 13:43:54.464168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.071 [2024-10-01 13:43:54.464202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.071 [2024-10-01 13:43:54.464221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.071 [2024-10-01 13:43:54.464255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.071 [2024-10-01 13:43:54.464289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.071 [2024-10-01 13:43:54.464306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.071 [2024-10-01 13:43:54.464330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.072 [2024-10-01 13:43:54.464362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.072 [2024-10-01 13:43:54.474274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.072 [2024-10-01 13:43:54.474404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.072 [2024-10-01 13:43:54.474438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.072 [2024-10-01 13:43:54.474487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.072 [2024-10-01 13:43:54.474524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.072 [2024-10-01 13:43:54.474576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.072 [2024-10-01 13:43:54.474596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.072 [2024-10-01 13:43:54.474611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.072 [2024-10-01 13:43:54.474643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.072 [2024-10-01 13:43:54.486022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.072 [2024-10-01 13:43:54.486164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.072 [2024-10-01 13:43:54.486209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.072 [2024-10-01 13:43:54.486231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.072 [2024-10-01 13:43:54.486266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.072 [2024-10-01 13:43:54.486299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.072 [2024-10-01 13:43:54.486317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.072 [2024-10-01 13:43:54.486332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.072 [2024-10-01 13:43:54.486364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.072 [2024-10-01 13:43:54.497406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.072 [2024-10-01 13:43:54.497622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.072 [2024-10-01 13:43:54.497660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.072 [2024-10-01 13:43:54.497680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.072 [2024-10-01 13:43:54.497718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.072 [2024-10-01 13:43:54.497752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.072 [2024-10-01 13:43:54.497771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.072 [2024-10-01 13:43:54.497787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.072 [2024-10-01 13:43:54.497820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.072 [2024-10-01 13:43:54.509551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.072 [2024-10-01 13:43:54.510302] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.072 [2024-10-01 13:43:54.510353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.072 [2024-10-01 13:43:54.510376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.072 [2024-10-01 13:43:54.510487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.072 [2024-10-01 13:43:54.510529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.072 [2024-10-01 13:43:54.510596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.072 [2024-10-01 13:43:54.510613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.072 [2024-10-01 13:43:54.510650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.072 [2024-10-01 13:43:54.520960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.072 [2024-10-01 13:43:54.521119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.072 [2024-10-01 13:43:54.521166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.072 [2024-10-01 13:43:54.521187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.072 [2024-10-01 13:43:54.521223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.072 [2024-10-01 13:43:54.521257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.072 [2024-10-01 13:43:54.521275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.072 [2024-10-01 13:43:54.521289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.072 [2024-10-01 13:43:54.521322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.072 [2024-10-01 13:43:54.532520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.072 [2024-10-01 13:43:54.532678] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.072 [2024-10-01 13:43:54.532721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.072 [2024-10-01 13:43:54.532743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.072 [2024-10-01 13:43:54.532779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.072 [2024-10-01 13:43:54.532812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.072 [2024-10-01 13:43:54.532830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.072 [2024-10-01 13:43:54.532844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.072 [2024-10-01 13:43:54.532877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.072 [2024-10-01 13:43:54.542897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.072 [2024-10-01 13:43:54.543028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.072 [2024-10-01 13:43:54.543062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.072 [2024-10-01 13:43:54.543081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.072 [2024-10-01 13:43:54.543115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.072 [2024-10-01 13:43:54.544091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.072 [2024-10-01 13:43:54.544136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.072 [2024-10-01 13:43:54.544154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.072 [2024-10-01 13:43:54.544370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.072 [2024-10-01 13:43:54.554048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.072 [2024-10-01 13:43:54.554185] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.072 [2024-10-01 13:43:54.554220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.072 [2024-10-01 13:43:54.554239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.072 [2024-10-01 13:43:54.554273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.072 [2024-10-01 13:43:54.554306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.072 [2024-10-01 13:43:54.554323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.072 [2024-10-01 13:43:54.554338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.072 [2024-10-01 13:43:54.554370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.072 [2024-10-01 13:43:54.564341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.072 [2024-10-01 13:43:54.564486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.072 [2024-10-01 13:43:54.564526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.072 [2024-10-01 13:43:54.564565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.072 [2024-10-01 13:43:54.564604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.072 [2024-10-01 13:43:54.564645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.072 [2024-10-01 13:43:54.564665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.072 [2024-10-01 13:43:54.564680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.072 [2024-10-01 13:43:54.564712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.072 [2024-10-01 13:43:54.575896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.072 [2024-10-01 13:43:54.576053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.072 [2024-10-01 13:43:54.576091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.072 [2024-10-01 13:43:54.576111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.072 [2024-10-01 13:43:54.576146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.072 [2024-10-01 13:43:54.576179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.072 [2024-10-01 13:43:54.576197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.073 [2024-10-01 13:43:54.576212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.073 [2024-10-01 13:43:54.576244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.073 [2024-10-01 13:43:54.586249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.073 [2024-10-01 13:43:54.586397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.073 [2024-10-01 13:43:54.586438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.073 [2024-10-01 13:43:54.586459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.073 [2024-10-01 13:43:54.587440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.073 [2024-10-01 13:43:54.587685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.073 [2024-10-01 13:43:54.587725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.073 [2024-10-01 13:43:54.587743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.073 [2024-10-01 13:43:54.587826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.073 [2024-10-01 13:43:54.597359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.073 [2024-10-01 13:43:54.597564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.073 [2024-10-01 13:43:54.597603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.073 [2024-10-01 13:43:54.597624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.073 [2024-10-01 13:43:54.597661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.073 [2024-10-01 13:43:54.597695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.073 [2024-10-01 13:43:54.597713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.073 [2024-10-01 13:43:54.597729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.073 [2024-10-01 13:43:54.597763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.073 [2024-10-01 13:43:54.607699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.073 [2024-10-01 13:43:54.607836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.073 [2024-10-01 13:43:54.607886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.073 [2024-10-01 13:43:54.607908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.073 [2024-10-01 13:43:54.607945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.073 [2024-10-01 13:43:54.607978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.073 [2024-10-01 13:43:54.607996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.073 [2024-10-01 13:43:54.608011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.073 [2024-10-01 13:43:54.608043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.073 [2024-10-01 13:43:54.618968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.073 [2024-10-01 13:43:54.619106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.073 [2024-10-01 13:43:54.619150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.073 [2024-10-01 13:43:54.619171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.073 [2024-10-01 13:43:54.619207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.073 [2024-10-01 13:43:54.619256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.073 [2024-10-01 13:43:54.619279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.073 [2024-10-01 13:43:54.619323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.073 [2024-10-01 13:43:54.619360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.073 [2024-10-01 13:43:54.629273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.073 [2024-10-01 13:43:54.629400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.073 [2024-10-01 13:43:54.629443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.073 [2024-10-01 13:43:54.629464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.073 [2024-10-01 13:43:54.629499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.073 [2024-10-01 13:43:54.630438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.073 [2024-10-01 13:43:54.630479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.073 [2024-10-01 13:43:54.630497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.073 [2024-10-01 13:43:54.630705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.073 [2024-10-01 13:43:54.640340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.073 [2024-10-01 13:43:54.640527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.073 [2024-10-01 13:43:54.640602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.073 [2024-10-01 13:43:54.640627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.073 [2024-10-01 13:43:54.640668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.073 [2024-10-01 13:43:54.640702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.073 [2024-10-01 13:43:54.640719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.073 [2024-10-01 13:43:54.640735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.073 [2024-10-01 13:43:54.640767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.073 [2024-10-01 13:43:54.650602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.073 [2024-10-01 13:43:54.650727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.073 [2024-10-01 13:43:54.650789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.073 [2024-10-01 13:43:54.650811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.073 [2024-10-01 13:43:54.650847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.073 [2024-10-01 13:43:54.650880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.073 [2024-10-01 13:43:54.650898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.073 [2024-10-01 13:43:54.650913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.073 [2024-10-01 13:43:54.650945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.073 [2024-10-01 13:43:54.661837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.073 [2024-10-01 13:43:54.661973] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.073 [2024-10-01 13:43:54.662041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.073 [2024-10-01 13:43:54.662065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.073 7833.50 IOPS, 30.60 MiB/s [2024-10-01 13:43:54.665017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.073 [2024-10-01 13:43:54.666001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.073 [2024-10-01 13:43:54.666062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.073 [2024-10-01 13:43:54.666093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.073 [2024-10-01 13:43:54.667176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
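The interleaved "7833.50 IOPS, 30.60 MiB/s" figure above is a periodic throughput sample from the I/O workload running alongside these resets, not part of the error stream; the two numbers are mutually consistent with about 4 KiB per I/O (30.60 MiB/s divided by 7833.50 IOPS is roughly 4096 bytes), which is an inference from the sample itself rather than something the log states.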
00:16:17.073 [2024-10-01 13:43:54.672305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.073 [2024-10-01 13:43:54.672431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.073 [2024-10-01 13:43:54.672465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.073 [2024-10-01 13:43:54.672484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.073 [2024-10-01 13:43:54.672518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.073 [2024-10-01 13:43:54.672568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.073 [2024-10-01 13:43:54.672589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.073 [2024-10-01 13:43:54.672603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.073 [2024-10-01 13:43:54.672650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.073 [2024-10-01 13:43:54.684074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.073 [2024-10-01 13:43:54.684263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.073 [2024-10-01 13:43:54.684309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.073 [2024-10-01 13:43:54.684331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.073 [2024-10-01 13:43:54.684367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.073 [2024-10-01 13:43:54.684401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.073 [2024-10-01 13:43:54.684419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.074 [2024-10-01 13:43:54.684433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.074 [2024-10-01 13:43:54.684467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.074 [2024-10-01 13:43:54.694479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.074 [2024-10-01 13:43:54.694641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.074 [2024-10-01 13:43:54.694684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.074 [2024-10-01 13:43:54.694705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.074 [2024-10-01 13:43:54.694739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.074 [2024-10-01 13:43:54.694798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.074 [2024-10-01 13:43:54.694818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.074 [2024-10-01 13:43:54.694833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.074 [2024-10-01 13:43:54.694866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.074 [2024-10-01 13:43:54.705854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.074 [2024-10-01 13:43:54.706003] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.074 [2024-10-01 13:43:54.706050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.074 [2024-10-01 13:43:54.706071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.074 [2024-10-01 13:43:54.706106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.074 [2024-10-01 13:43:54.706139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.074 [2024-10-01 13:43:54.706157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.074 [2024-10-01 13:43:54.706171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.074 [2024-10-01 13:43:54.706203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.074 [2024-10-01 13:43:54.716294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.074 [2024-10-01 13:43:54.716430] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.074 [2024-10-01 13:43:54.716464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.074 [2024-10-01 13:43:54.716483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.074 [2024-10-01 13:43:54.716517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.074 [2024-10-01 13:43:54.717464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.074 [2024-10-01 13:43:54.717504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.074 [2024-10-01 13:43:54.717522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.074 [2024-10-01 13:43:54.717757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.074 [2024-10-01 13:43:54.727484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.074 [2024-10-01 13:43:54.727644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.074 [2024-10-01 13:43:54.727680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.074 [2024-10-01 13:43:54.727699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.074 [2024-10-01 13:43:54.727734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.074 [2024-10-01 13:43:54.727767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.074 [2024-10-01 13:43:54.727785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.074 [2024-10-01 13:43:54.727799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.074 [2024-10-01 13:43:54.727862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.074 [2024-10-01 13:43:54.737831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.074 [2024-10-01 13:43:54.737989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.074 [2024-10-01 13:43:54.738024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.074 [2024-10-01 13:43:54.738043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.074 [2024-10-01 13:43:54.738080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.074 [2024-10-01 13:43:54.738113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.074 [2024-10-01 13:43:54.738132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.074 [2024-10-01 13:43:54.738147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.074 [2024-10-01 13:43:54.738180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.074 [2024-10-01 13:43:54.749377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.074 [2024-10-01 13:43:54.749635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.074 [2024-10-01 13:43:54.749695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.074 [2024-10-01 13:43:54.749731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.074 [2024-10-01 13:43:54.749791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.074 [2024-10-01 13:43:54.749849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.074 [2024-10-01 13:43:54.749884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.074 [2024-10-01 13:43:54.749912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.074 [2024-10-01 13:43:54.749966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.074 [2024-10-01 13:43:54.760290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.074 [2024-10-01 13:43:54.760458] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.074 [2024-10-01 13:43:54.760495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.074 [2024-10-01 13:43:54.760515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.074 [2024-10-01 13:43:54.761463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.074 [2024-10-01 13:43:54.761728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.074 [2024-10-01 13:43:54.761768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.074 [2024-10-01 13:43:54.761787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.074 [2024-10-01 13:43:54.761870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.074 [2024-10-01 13:43:54.771382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.074 [2024-10-01 13:43:54.771523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.074 [2024-10-01 13:43:54.771576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.074 [2024-10-01 13:43:54.771627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.074 [2024-10-01 13:43:54.771667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.074 [2024-10-01 13:43:54.771701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.074 [2024-10-01 13:43:54.771718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.074 [2024-10-01 13:43:54.771732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.074 [2024-10-01 13:43:54.771766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.074 [2024-10-01 13:43:54.781600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.074 [2024-10-01 13:43:54.781748] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.074 [2024-10-01 13:43:54.781785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.074 [2024-10-01 13:43:54.781804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.074 [2024-10-01 13:43:54.781840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.074 [2024-10-01 13:43:54.781873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.074 [2024-10-01 13:43:54.781892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.074 [2024-10-01 13:43:54.781906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.074 [2024-10-01 13:43:54.781938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.074 [2024-10-01 13:43:54.793015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.074 [2024-10-01 13:43:54.793180] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.074 [2024-10-01 13:43:54.793215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.074 [2024-10-01 13:43:54.793234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.074 [2024-10-01 13:43:54.793269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.074 [2024-10-01 13:43:54.793302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.074 [2024-10-01 13:43:54.793320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.074 [2024-10-01 13:43:54.793335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.074 [2024-10-01 13:43:54.793367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.075 [2024-10-01 13:43:54.804261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.075 [2024-10-01 13:43:54.804393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.075 [2024-10-01 13:43:54.804428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.075 [2024-10-01 13:43:54.804447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.075 [2024-10-01 13:43:54.804481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.075 [2024-10-01 13:43:54.804514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.075 [2024-10-01 13:43:54.804574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.075 [2024-10-01 13:43:54.804592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.075 [2024-10-01 13:43:54.805558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.075 [2024-10-01 13:43:54.815322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.075 [2024-10-01 13:43:54.815455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.075 [2024-10-01 13:43:54.815490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.075 [2024-10-01 13:43:54.815508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.075 [2024-10-01 13:43:54.815561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.075 [2024-10-01 13:43:54.815598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.075 [2024-10-01 13:43:54.815617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.075 [2024-10-01 13:43:54.815632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.075 [2024-10-01 13:43:54.815664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.075 [2024-10-01 13:43:54.825632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.075 [2024-10-01 13:43:54.825770] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.075 [2024-10-01 13:43:54.825812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.075 [2024-10-01 13:43:54.825831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.075 [2024-10-01 13:43:54.825874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.075 [2024-10-01 13:43:54.825911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.075 [2024-10-01 13:43:54.825929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.075 [2024-10-01 13:43:54.825944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.075 [2024-10-01 13:43:54.825977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.075 [2024-10-01 13:43:54.836969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.075 [2024-10-01 13:43:54.837109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.075 [2024-10-01 13:43:54.837144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.075 [2024-10-01 13:43:54.837163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.075 [2024-10-01 13:43:54.837212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.075 [2024-10-01 13:43:54.837249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.075 [2024-10-01 13:43:54.837267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.075 [2024-10-01 13:43:54.837293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.075 [2024-10-01 13:43:54.837325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.075 [2024-10-01 13:43:54.848433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.075 [2024-10-01 13:43:54.848587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.075 [2024-10-01 13:43:54.848623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.075 [2024-10-01 13:43:54.848643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.075 [2024-10-01 13:43:54.848679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.075 [2024-10-01 13:43:54.848712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.075 [2024-10-01 13:43:54.848730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.075 [2024-10-01 13:43:54.848744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.075 [2024-10-01 13:43:54.848777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.075 [2024-10-01 13:43:54.860180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.075 [2024-10-01 13:43:54.860940] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.075 [2024-10-01 13:43:54.860987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.075 [2024-10-01 13:43:54.861008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.075 [2024-10-01 13:43:54.861102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.075 [2024-10-01 13:43:54.861143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.075 [2024-10-01 13:43:54.861161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.075 [2024-10-01 13:43:54.861177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.075 [2024-10-01 13:43:54.861210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.075 [2024-10-01 13:43:54.870290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.075 [2024-10-01 13:43:54.870440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.075 [2024-10-01 13:43:54.870475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.075 [2024-10-01 13:43:54.870495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.075 [2024-10-01 13:43:54.870530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.075 [2024-10-01 13:43:54.870613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.075 [2024-10-01 13:43:54.870637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.075 [2024-10-01 13:43:54.870652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.075 [2024-10-01 13:43:54.870687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.075 [2024-10-01 13:43:54.880407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.075 [2024-10-01 13:43:54.880558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.075 [2024-10-01 13:43:54.880595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.075 [2024-10-01 13:43:54.880614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.075 [2024-10-01 13:43:54.880683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.075 [2024-10-01 13:43:54.880718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.075 [2024-10-01 13:43:54.880736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.075 [2024-10-01 13:43:54.880751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.075 [2024-10-01 13:43:54.880783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.075 [2024-10-01 13:43:54.891885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.075 [2024-10-01 13:43:54.892019] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.075 [2024-10-01 13:43:54.892082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.075 [2024-10-01 13:43:54.892111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.075 [2024-10-01 13:43:54.892150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.075 [2024-10-01 13:43:54.892184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.075 [2024-10-01 13:43:54.892202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.076 [2024-10-01 13:43:54.892216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.076 [2024-10-01 13:43:54.892250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.076 [2024-10-01 13:43:54.903121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.076 [2024-10-01 13:43:54.903308] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.076 [2024-10-01 13:43:54.903378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.076 [2024-10-01 13:43:54.903417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.076 [2024-10-01 13:43:54.903476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.076 [2024-10-01 13:43:54.903574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.076 [2024-10-01 13:43:54.903619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.076 [2024-10-01 13:43:54.903648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.076 [2024-10-01 13:43:54.905250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.076 [2024-10-01 13:43:54.913518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.076 [2024-10-01 13:43:54.913679] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.076 [2024-10-01 13:43:54.913727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.076 [2024-10-01 13:43:54.913762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.076 [2024-10-01 13:43:54.914971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.076 [2024-10-01 13:43:54.915839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.076 [2024-10-01 13:43:54.915921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.076 [2024-10-01 13:43:54.916005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.076 [2024-10-01 13:43:54.916187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.076 [2024-10-01 13:43:54.925292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.076 [2024-10-01 13:43:54.925477] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.076 [2024-10-01 13:43:54.925530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.076 [2024-10-01 13:43:54.925590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.076 [2024-10-01 13:43:54.925648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.076 [2024-10-01 13:43:54.925700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.076 [2024-10-01 13:43:54.925730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.076 [2024-10-01 13:43:54.925754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.076 [2024-10-01 13:43:54.925811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.076 [2024-10-01 13:43:54.935490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.076 [2024-10-01 13:43:54.935639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.076 [2024-10-01 13:43:54.935675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.076 [2024-10-01 13:43:54.935695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.076 [2024-10-01 13:43:54.935743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.076 [2024-10-01 13:43:54.935785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.076 [2024-10-01 13:43:54.935804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.076 [2024-10-01 13:43:54.935819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.076 [2024-10-01 13:43:54.935853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.076 [2024-10-01 13:43:54.945609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.076 [2024-10-01 13:43:54.945736] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.076 [2024-10-01 13:43:54.945771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.076 [2024-10-01 13:43:54.945789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.076 [2024-10-01 13:43:54.945823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.076 [2024-10-01 13:43:54.945856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.076 [2024-10-01 13:43:54.945875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.076 [2024-10-01 13:43:54.945889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.076 [2024-10-01 13:43:54.945921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.076 [2024-10-01 13:43:54.955708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.076 [2024-10-01 13:43:54.955901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.076 [2024-10-01 13:43:54.955937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.076 [2024-10-01 13:43:54.955956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.076 [2024-10-01 13:43:54.957279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.076 [2024-10-01 13:43:54.957507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.076 [2024-10-01 13:43:54.957566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.076 [2024-10-01 13:43:54.957588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.076 [2024-10-01 13:43:54.958382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.076 [2024-10-01 13:43:54.967259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.076 [2024-10-01 13:43:54.967401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.076 [2024-10-01 13:43:54.967450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.076 [2024-10-01 13:43:54.967476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.076 [2024-10-01 13:43:54.967513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.076 [2024-10-01 13:43:54.967563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.076 [2024-10-01 13:43:54.967594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.076 [2024-10-01 13:43:54.967620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.076 [2024-10-01 13:43:54.967657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.076 [2024-10-01 13:43:54.977379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.076 [2024-10-01 13:43:54.977515] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.076 [2024-10-01 13:43:54.977572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.076 [2024-10-01 13:43:54.977608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.076 [2024-10-01 13:43:54.978978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.076 [2024-10-01 13:43:54.980015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.076 [2024-10-01 13:43:54.980067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.076 [2024-10-01 13:43:54.980101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.076 [2024-10-01 13:43:54.980274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.076 [2024-10-01 13:43:54.987483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.076 [2024-10-01 13:43:54.987672] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.076 [2024-10-01 13:43:54.987721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.076 [2024-10-01 13:43:54.987743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.076 [2024-10-01 13:43:54.987779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.076 [2024-10-01 13:43:54.989200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.076 [2024-10-01 13:43:54.989246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.076 [2024-10-01 13:43:54.989266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.076 [2024-10-01 13:43:54.989528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.076 [2024-10-01 13:43:54.998093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.076 [2024-10-01 13:43:54.998242] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.076 [2024-10-01 13:43:54.998290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.076 [2024-10-01 13:43:54.998313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.076 [2024-10-01 13:43:54.998352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.076 [2024-10-01 13:43:54.998419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.076 [2024-10-01 13:43:54.998448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.076 [2024-10-01 13:43:54.998464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.076 [2024-10-01 13:43:54.998499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.077 [2024-10-01 13:43:55.008981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.077 [2024-10-01 13:43:55.009131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.077 [2024-10-01 13:43:55.009170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.077 [2024-10-01 13:43:55.009203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.077 [2024-10-01 13:43:55.009249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.077 [2024-10-01 13:43:55.010754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.077 [2024-10-01 13:43:55.010810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.077 [2024-10-01 13:43:55.010842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.077 [2024-10-01 13:43:55.011880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.077 [2024-10-01 13:43:55.020815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.077 [2024-10-01 13:43:55.021141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.077 [2024-10-01 13:43:55.021203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.077 [2024-10-01 13:43:55.021243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.077 [2024-10-01 13:43:55.021404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.077 [2024-10-01 13:43:55.021486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.077 [2024-10-01 13:43:55.021521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.077 [2024-10-01 13:43:55.021570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.077 [2024-10-01 13:43:55.023303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.077 [2024-10-01 13:43:55.030946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.077 [2024-10-01 13:43:55.031080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.077 [2024-10-01 13:43:55.031115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.077 [2024-10-01 13:43:55.031134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.077 [2024-10-01 13:43:55.031169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.077 [2024-10-01 13:43:55.031202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.077 [2024-10-01 13:43:55.031220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.077 [2024-10-01 13:43:55.031235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.077 [2024-10-01 13:43:55.031268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.077 [2024-10-01 13:43:55.041475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.077 [2024-10-01 13:43:55.042859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.077 [2024-10-01 13:43:55.042911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.077 [2024-10-01 13:43:55.042933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.077 [2024-10-01 13:43:55.043847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.077 [2024-10-01 13:43:55.044143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.077 [2024-10-01 13:43:55.044182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.077 [2024-10-01 13:43:55.044201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.077 [2024-10-01 13:43:55.044240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.077 [2024-10-01 13:43:55.051757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.077 [2024-10-01 13:43:55.051911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.077 [2024-10-01 13:43:55.051947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.077 [2024-10-01 13:43:55.051967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.077 [2024-10-01 13:43:55.052003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.077 [2024-10-01 13:43:55.052037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.077 [2024-10-01 13:43:55.052055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.077 [2024-10-01 13:43:55.052080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.077 [2024-10-01 13:43:55.052134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.077 [2024-10-01 13:43:55.063115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.077 [2024-10-01 13:43:55.063260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.077 [2024-10-01 13:43:55.063295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.077 [2024-10-01 13:43:55.063338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.077 [2024-10-01 13:43:55.063382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.077 [2024-10-01 13:43:55.063436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.077 [2024-10-01 13:43:55.063458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.077 [2024-10-01 13:43:55.063473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.077 [2024-10-01 13:43:55.063525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.077 [2024-10-01 13:43:55.074915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.077 [2024-10-01 13:43:55.075067] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.077 [2024-10-01 13:43:55.075111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.077 [2024-10-01 13:43:55.075132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.077 [2024-10-01 13:43:55.075169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.077 [2024-10-01 13:43:55.075202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.077 [2024-10-01 13:43:55.075231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.077 [2024-10-01 13:43:55.075260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.077 [2024-10-01 13:43:55.075313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.077 [2024-10-01 13:43:55.086652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.077 [2024-10-01 13:43:55.086844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.077 [2024-10-01 13:43:55.086893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.077 [2024-10-01 13:43:55.086925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.077 [2024-10-01 13:43:55.087988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.077 [2024-10-01 13:43:55.088236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.077 [2024-10-01 13:43:55.088275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.077 [2024-10-01 13:43:55.088294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.077 [2024-10-01 13:43:55.088377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.077 [2024-10-01 13:43:55.096779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.077 [2024-10-01 13:43:55.096915] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.077 [2024-10-01 13:43:55.096950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.077 [2024-10-01 13:43:55.096968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.077 [2024-10-01 13:43:55.097806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.077 [2024-10-01 13:43:55.098028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.077 [2024-10-01 13:43:55.098088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.077 [2024-10-01 13:43:55.098108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.077 [2024-10-01 13:43:55.098237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.077 [2024-10-01 13:43:55.107039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.077 [2024-10-01 13:43:55.107164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.077 [2024-10-01 13:43:55.107198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.077 [2024-10-01 13:43:55.107217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.078 [2024-10-01 13:43:55.107251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.078 [2024-10-01 13:43:55.107283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.078 [2024-10-01 13:43:55.107301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.078 [2024-10-01 13:43:55.107316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.078 [2024-10-01 13:43:55.107349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.078 [2024-10-01 13:43:55.117139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.078 [2024-10-01 13:43:55.117271] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.078 [2024-10-01 13:43:55.117306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.078 [2024-10-01 13:43:55.117325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.078 [2024-10-01 13:43:55.117360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.078 [2024-10-01 13:43:55.117392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.078 [2024-10-01 13:43:55.117410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.078 [2024-10-01 13:43:55.117424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.078 [2024-10-01 13:43:55.117457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.078 [2024-10-01 13:43:55.127714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.078 [2024-10-01 13:43:55.127856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.078 [2024-10-01 13:43:55.127904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.078 [2024-10-01 13:43:55.127923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.078 [2024-10-01 13:43:55.127958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.078 [2024-10-01 13:43:55.127991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.078 [2024-10-01 13:43:55.128009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.078 [2024-10-01 13:43:55.128024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.078 [2024-10-01 13:43:55.128056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.078 [2024-10-01 13:43:55.138085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.078 [2024-10-01 13:43:55.138217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.078 [2024-10-01 13:43:55.138251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.078 [2024-10-01 13:43:55.138270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.078 [2024-10-01 13:43:55.139585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.078 [2024-10-01 13:43:55.139849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.078 [2024-10-01 13:43:55.139901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.078 [2024-10-01 13:43:55.139920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.078 [2024-10-01 13:43:55.139958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.078 [2024-10-01 13:43:55.148236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.078 [2024-10-01 13:43:55.148363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.078 [2024-10-01 13:43:55.148397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.078 [2024-10-01 13:43:55.148416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.078 [2024-10-01 13:43:55.148449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.078 [2024-10-01 13:43:55.148482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.078 [2024-10-01 13:43:55.148503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.078 [2024-10-01 13:43:55.148524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.078 [2024-10-01 13:43:55.148574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.078 [2024-10-01 13:43:55.158566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.078 [2024-10-01 13:43:55.158788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.078 [2024-10-01 13:43:55.158827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.078 [2024-10-01 13:43:55.158848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.078 [2024-10-01 13:43:55.158887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.078 [2024-10-01 13:43:55.158922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.078 [2024-10-01 13:43:55.158939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.078 [2024-10-01 13:43:55.158955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.078 [2024-10-01 13:43:55.158988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.078 [2024-10-01 13:43:55.169746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.078 [2024-10-01 13:43:55.169898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.078 [2024-10-01 13:43:55.169933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.078 [2024-10-01 13:43:55.169953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.078 [2024-10-01 13:43:55.170019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.078 [2024-10-01 13:43:55.170070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.078 [2024-10-01 13:43:55.170093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.078 [2024-10-01 13:43:55.170108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.078 [2024-10-01 13:43:55.170142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.078 [2024-10-01 13:43:55.179856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.078 [2024-10-01 13:43:55.180005] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.078 [2024-10-01 13:43:55.180039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.078 [2024-10-01 13:43:55.180058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.078 [2024-10-01 13:43:55.181010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.078 [2024-10-01 13:43:55.181225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.078 [2024-10-01 13:43:55.181261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.078 [2024-10-01 13:43:55.181279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.078 [2024-10-01 13:43:55.181359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.078 [2024-10-01 13:43:55.190649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.078 [2024-10-01 13:43:55.190846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.078 [2024-10-01 13:43:55.190883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.078 [2024-10-01 13:43:55.190904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.078 [2024-10-01 13:43:55.190941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.078 [2024-10-01 13:43:55.190975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.078 [2024-10-01 13:43:55.190993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.078 [2024-10-01 13:43:55.191009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.078 [2024-10-01 13:43:55.191042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.078 [2024-10-01 13:43:55.201049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.078 [2024-10-01 13:43:55.201174] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.078 [2024-10-01 13:43:55.201208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.078 [2024-10-01 13:43:55.201227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.078 [2024-10-01 13:43:55.201261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.078 [2024-10-01 13:43:55.201294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.078 [2024-10-01 13:43:55.201320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.078 [2024-10-01 13:43:55.201368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.078 [2024-10-01 13:43:55.201404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.078 [2024-10-01 13:43:55.212250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.078 [2024-10-01 13:43:55.212471] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.078 [2024-10-01 13:43:55.212512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.078 [2024-10-01 13:43:55.212532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.078 [2024-10-01 13:43:55.212610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.078 [2024-10-01 13:43:55.212649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.079 [2024-10-01 13:43:55.212668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.079 [2024-10-01 13:43:55.212684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.079 [2024-10-01 13:43:55.212718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.079 [2024-10-01 13:43:55.222548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.079 [2024-10-01 13:43:55.222742] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.079 [2024-10-01 13:43:55.222779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.079 [2024-10-01 13:43:55.222808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.079 [2024-10-01 13:43:55.223762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.079 [2024-10-01 13:43:55.223995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.079 [2024-10-01 13:43:55.224032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.079 [2024-10-01 13:43:55.224052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.079 [2024-10-01 13:43:55.225348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.079 [2024-10-01 13:43:55.233368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.079 [2024-10-01 13:43:55.233513] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.079 [2024-10-01 13:43:55.233564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.079 [2024-10-01 13:43:55.233586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.079 [2024-10-01 13:43:55.233623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.079 [2024-10-01 13:43:55.233678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.079 [2024-10-01 13:43:55.233711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.079 [2024-10-01 13:43:55.233740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.079 [2024-10-01 13:43:55.233797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.079 [2024-10-01 13:43:55.243479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.079 [2024-10-01 13:43:55.243664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.079 [2024-10-01 13:43:55.243700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.079 [2024-10-01 13:43:55.243721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.079 [2024-10-01 13:43:55.243757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.079 [2024-10-01 13:43:55.243790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.079 [2024-10-01 13:43:55.243808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.079 [2024-10-01 13:43:55.243823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.079 [2024-10-01 13:43:55.243856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.079 [2024-10-01 13:43:55.254609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.079 [2024-10-01 13:43:55.254751] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.079 [2024-10-01 13:43:55.254786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.079 [2024-10-01 13:43:55.254805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.079 [2024-10-01 13:43:55.254856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.079 [2024-10-01 13:43:55.254893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.079 [2024-10-01 13:43:55.254912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.079 [2024-10-01 13:43:55.254926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.079 [2024-10-01 13:43:55.254958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.079 [2024-10-01 13:43:55.264797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.079 [2024-10-01 13:43:55.264930] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.079 [2024-10-01 13:43:55.264964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.079 [2024-10-01 13:43:55.264983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.079 [2024-10-01 13:43:55.265927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.079 [2024-10-01 13:43:55.266162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.079 [2024-10-01 13:43:55.266202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.079 [2024-10-01 13:43:55.266220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.079 [2024-10-01 13:43:55.266302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.079 [2024-10-01 13:43:55.275496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.079 [2024-10-01 13:43:55.275637] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.079 [2024-10-01 13:43:55.275672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.079 [2024-10-01 13:43:55.275691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.079 [2024-10-01 13:43:55.275725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.079 [2024-10-01 13:43:55.275803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.079 [2024-10-01 13:43:55.275826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.079 [2024-10-01 13:43:55.275841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.079 [2024-10-01 13:43:55.275887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.079 [2024-10-01 13:43:55.285695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.079 [2024-10-01 13:43:55.285824] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.079 [2024-10-01 13:43:55.285859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.079 [2024-10-01 13:43:55.285878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.079 [2024-10-01 13:43:55.285912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.079 [2024-10-01 13:43:55.285944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.079 [2024-10-01 13:43:55.285962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.079 [2024-10-01 13:43:55.285976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.079 [2024-10-01 13:43:55.286009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.079 [2024-10-01 13:43:55.296707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.079 [2024-10-01 13:43:55.296833] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.079 [2024-10-01 13:43:55.296867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.079 [2024-10-01 13:43:55.296886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.079 [2024-10-01 13:43:55.296922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.079 [2024-10-01 13:43:55.296983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.079 [2024-10-01 13:43:55.297007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.079 [2024-10-01 13:43:55.297022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.079 [2024-10-01 13:43:55.297055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.079 [2024-10-01 13:43:55.306864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.079 [2024-10-01 13:43:55.307063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.079 [2024-10-01 13:43:55.307100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.079 [2024-10-01 13:43:55.307120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.079 [2024-10-01 13:43:55.308097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.079 [2024-10-01 13:43:55.308328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.080 [2024-10-01 13:43:55.308365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.080 [2024-10-01 13:43:55.308384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.080 [2024-10-01 13:43:55.308550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.080 [2024-10-01 13:43:55.317723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.080 [2024-10-01 13:43:55.317878] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.080 [2024-10-01 13:43:55.317914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.080 [2024-10-01 13:43:55.317933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.080 [2024-10-01 13:43:55.317969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.080 [2024-10-01 13:43:55.318017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.080 [2024-10-01 13:43:55.318040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.080 [2024-10-01 13:43:55.318056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.080 [2024-10-01 13:43:55.318089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.080 [2024-10-01 13:43:55.327861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.080 [2024-10-01 13:43:55.328085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.080 [2024-10-01 13:43:55.328123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.080 [2024-10-01 13:43:55.328144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.080 [2024-10-01 13:43:55.328181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.080 [2024-10-01 13:43:55.328215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.080 [2024-10-01 13:43:55.328232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.080 [2024-10-01 13:43:55.328248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.080 [2024-10-01 13:43:55.328289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.080 [2024-10-01 13:43:55.339401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.080 [2024-10-01 13:43:55.339603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.080 [2024-10-01 13:43:55.339641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.080 [2024-10-01 13:43:55.339661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.080 [2024-10-01 13:43:55.339700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.080 [2024-10-01 13:43:55.339733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.080 [2024-10-01 13:43:55.339752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.080 [2024-10-01 13:43:55.339767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.080 [2024-10-01 13:43:55.339800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.080 [2024-10-01 13:43:55.349598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.080 [2024-10-01 13:43:55.349774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.080 [2024-10-01 13:43:55.349810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.080 [2024-10-01 13:43:55.349861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.080 [2024-10-01 13:43:55.350819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.080 [2024-10-01 13:43:55.351053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.080 [2024-10-01 13:43:55.351089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.080 [2024-10-01 13:43:55.351109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.080 [2024-10-01 13:43:55.352412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.080 [2024-10-01 13:43:55.360407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.080 [2024-10-01 13:43:55.360558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.080 [2024-10-01 13:43:55.360594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.080 [2024-10-01 13:43:55.360613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.080 [2024-10-01 13:43:55.360662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.080 [2024-10-01 13:43:55.360700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.080 [2024-10-01 13:43:55.360718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.080 [2024-10-01 13:43:55.360733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.080 [2024-10-01 13:43:55.360765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.080 [2024-10-01 13:43:55.370517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.080 [2024-10-01 13:43:55.370661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.080 [2024-10-01 13:43:55.370695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.080 [2024-10-01 13:43:55.370713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.080 [2024-10-01 13:43:55.370748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.080 [2024-10-01 13:43:55.370780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.080 [2024-10-01 13:43:55.370798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.080 [2024-10-01 13:43:55.370812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.080 [2024-10-01 13:43:55.370843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.080 [2024-10-01 13:43:55.381621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.080 [2024-10-01 13:43:55.381752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.080 [2024-10-01 13:43:55.381790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.080 [2024-10-01 13:43:55.381811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.080 [2024-10-01 13:43:55.381845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.080 [2024-10-01 13:43:55.381893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.080 [2024-10-01 13:43:55.381945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.080 [2024-10-01 13:43:55.381962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.080 [2024-10-01 13:43:55.381996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.080 [2024-10-01 13:43:55.391751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.080 [2024-10-01 13:43:55.391932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.080 [2024-10-01 13:43:55.391969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.080 [2024-10-01 13:43:55.391989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.080 [2024-10-01 13:43:55.392949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.080 [2024-10-01 13:43:55.393189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.080 [2024-10-01 13:43:55.393226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.080 [2024-10-01 13:43:55.393246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.080 [2024-10-01 13:43:55.393328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.080 [2024-10-01 13:43:55.402611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.080 [2024-10-01 13:43:55.402792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.080 [2024-10-01 13:43:55.402831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.080 [2024-10-01 13:43:55.402851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.080 [2024-10-01 13:43:55.402886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.080 [2024-10-01 13:43:55.402920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.080 [2024-10-01 13:43:55.402938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.080 [2024-10-01 13:43:55.402953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.080 [2024-10-01 13:43:55.402988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.080 [2024-10-01 13:43:55.412745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.080 [2024-10-01 13:43:55.412875] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.080 [2024-10-01 13:43:55.412908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.080 [2024-10-01 13:43:55.412927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.081 [2024-10-01 13:43:55.412960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.081 [2024-10-01 13:43:55.412993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.081 [2024-10-01 13:43:55.413011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.081 [2024-10-01 13:43:55.413026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.081 [2024-10-01 13:43:55.413058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.081 [2024-10-01 13:43:55.423746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.081 [2024-10-01 13:43:55.423933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.081 [2024-10-01 13:43:55.423975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.081 [2024-10-01 13:43:55.423996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.081 [2024-10-01 13:43:55.424035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.081 [2024-10-01 13:43:55.424069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.081 [2024-10-01 13:43:55.424087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.081 [2024-10-01 13:43:55.424102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.081 [2024-10-01 13:43:55.424136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.081 [2024-10-01 13:43:55.433934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.081 [2024-10-01 13:43:55.434105] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.081 [2024-10-01 13:43:55.434142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.081 [2024-10-01 13:43:55.434162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.081 [2024-10-01 13:43:55.435128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.081 [2024-10-01 13:43:55.435346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.081 [2024-10-01 13:43:55.435391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.081 [2024-10-01 13:43:55.435411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.081 [2024-10-01 13:43:55.435494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.081 [2024-10-01 13:43:55.444792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.081 [2024-10-01 13:43:55.444984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.081 [2024-10-01 13:43:55.445030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.081 [2024-10-01 13:43:55.445052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.081 [2024-10-01 13:43:55.446006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.081 [2024-10-01 13:43:55.446670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.081 [2024-10-01 13:43:55.446710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.081 [2024-10-01 13:43:55.446729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.081 [2024-10-01 13:43:55.446851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.081 [2024-10-01 13:43:55.455131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.081 [2024-10-01 13:43:55.455323] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.081 [2024-10-01 13:43:55.455359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.081 [2024-10-01 13:43:55.455379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.081 [2024-10-01 13:43:55.456242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.081 [2024-10-01 13:43:55.456467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.081 [2024-10-01 13:43:55.456504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.081 [2024-10-01 13:43:55.456524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.081 [2024-10-01 13:43:55.456586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.081 [2024-10-01 13:43:55.465363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.081 [2024-10-01 13:43:55.465573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.081 [2024-10-01 13:43:55.465612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.081 [2024-10-01 13:43:55.465632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.081 [2024-10-01 13:43:55.465672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.081 [2024-10-01 13:43:55.465706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.081 [2024-10-01 13:43:55.465724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.081 [2024-10-01 13:43:55.465741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.081 [2024-10-01 13:43:55.465774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.081 [2024-10-01 13:43:55.475619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.081 [2024-10-01 13:43:55.475765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.081 [2024-10-01 13:43:55.475800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.081 [2024-10-01 13:43:55.475820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.081 [2024-10-01 13:43:55.475856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.081 [2024-10-01 13:43:55.475905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.081 [2024-10-01 13:43:55.475924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.081 [2024-10-01 13:43:55.475938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.081 [2024-10-01 13:43:55.475970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.081 [2024-10-01 13:43:55.486814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.081 [2024-10-01 13:43:55.487003] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.081 [2024-10-01 13:43:55.487039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.081 [2024-10-01 13:43:55.487059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.081 [2024-10-01 13:43:55.487095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.081 [2024-10-01 13:43:55.487129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.081 [2024-10-01 13:43:55.487148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.081 [2024-10-01 13:43:55.487199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.081 [2024-10-01 13:43:55.487246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.081 [2024-10-01 13:43:55.497131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.081 [2024-10-01 13:43:55.497333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.081 [2024-10-01 13:43:55.497369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.081 [2024-10-01 13:43:55.497389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.081 [2024-10-01 13:43:55.498353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.081 [2024-10-01 13:43:55.498598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.081 [2024-10-01 13:43:55.498635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.081 [2024-10-01 13:43:55.498654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.081 [2024-10-01 13:43:55.498737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.081 [2024-10-01 13:43:55.508309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.081 [2024-10-01 13:43:55.508516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.081 [2024-10-01 13:43:55.508582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.081 [2024-10-01 13:43:55.508606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.081 [2024-10-01 13:43:55.508645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.081 [2024-10-01 13:43:55.508681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.081 [2024-10-01 13:43:55.508699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.081 [2024-10-01 13:43:55.508715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.082 [2024-10-01 13:43:55.508749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.082 [2024-10-01 13:43:55.518714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.082 [2024-10-01 13:43:55.518852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.082 [2024-10-01 13:43:55.518887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.082 [2024-10-01 13:43:55.518907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.082 [2024-10-01 13:43:55.518949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.082 [2024-10-01 13:43:55.518984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.082 [2024-10-01 13:43:55.519002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.082 [2024-10-01 13:43:55.519016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.082 [2024-10-01 13:43:55.519048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.082 [2024-10-01 13:43:55.530127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.082 [2024-10-01 13:43:55.530319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.082 [2024-10-01 13:43:55.530356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.082 [2024-10-01 13:43:55.530376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.082 [2024-10-01 13:43:55.530411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.082 [2024-10-01 13:43:55.530444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.082 [2024-10-01 13:43:55.530462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.082 [2024-10-01 13:43:55.530477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.082 [2024-10-01 13:43:55.530510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.082 [2024-10-01 13:43:55.541884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.082 [2024-10-01 13:43:55.543516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.082 [2024-10-01 13:43:55.543586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.082 [2024-10-01 13:43:55.543609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.082 [2024-10-01 13:43:55.543789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.082 [2024-10-01 13:43:55.543842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.082 [2024-10-01 13:43:55.543863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.082 [2024-10-01 13:43:55.543895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.082 [2024-10-01 13:43:55.543932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.082 [2024-10-01 13:43:55.552097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.082 [2024-10-01 13:43:55.552229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.082 [2024-10-01 13:43:55.552263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.082 [2024-10-01 13:43:55.552283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.082 [2024-10-01 13:43:55.552317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.082 [2024-10-01 13:43:55.552350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.082 [2024-10-01 13:43:55.552368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.082 [2024-10-01 13:43:55.552383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.082 [2024-10-01 13:43:55.552425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.082 [2024-10-01 13:43:55.562595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.082 [2024-10-01 13:43:55.562736] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.082 [2024-10-01 13:43:55.562788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.082 [2024-10-01 13:43:55.562810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.082 [2024-10-01 13:43:55.562871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.082 [2024-10-01 13:43:55.562907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.082 [2024-10-01 13:43:55.562925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.082 [2024-10-01 13:43:55.562940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.082 [2024-10-01 13:43:55.562984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.082 [2024-10-01 13:43:55.574027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.082 [2024-10-01 13:43:55.574165] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.082 [2024-10-01 13:43:55.574200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.082 [2024-10-01 13:43:55.574218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.082 [2024-10-01 13:43:55.574253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.082 [2024-10-01 13:43:55.574286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.082 [2024-10-01 13:43:55.574305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.082 [2024-10-01 13:43:55.574319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.082 [2024-10-01 13:43:55.574352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.082 [2024-10-01 13:43:55.584410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.082 [2024-10-01 13:43:55.584553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.082 [2024-10-01 13:43:55.584588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.082 [2024-10-01 13:43:55.584607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.082 [2024-10-01 13:43:55.584642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.082 [2024-10-01 13:43:55.584675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.082 [2024-10-01 13:43:55.584693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.082 [2024-10-01 13:43:55.584707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.082 [2024-10-01 13:43:55.584740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.082 [2024-10-01 13:43:55.595760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.082 [2024-10-01 13:43:55.595943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.082 [2024-10-01 13:43:55.595980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.082 [2024-10-01 13:43:55.596001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.082 [2024-10-01 13:43:55.596037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.082 [2024-10-01 13:43:55.596071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.082 [2024-10-01 13:43:55.596089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.082 [2024-10-01 13:43:55.596104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.082 [2024-10-01 13:43:55.596176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.082 [2024-10-01 13:43:55.606621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.082 [2024-10-01 13:43:55.606786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.082 [2024-10-01 13:43:55.606822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.082 [2024-10-01 13:43:55.606842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.082 [2024-10-01 13:43:55.606880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.082 [2024-10-01 13:43:55.606924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.082 [2024-10-01 13:43:55.606943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.082 [2024-10-01 13:43:55.606959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.082 [2024-10-01 13:43:55.606992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.082 [2024-10-01 13:43:55.618286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.082 [2024-10-01 13:43:55.618429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.082 [2024-10-01 13:43:55.618471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.082 [2024-10-01 13:43:55.618503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.083 [2024-10-01 13:43:55.618557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.083 [2024-10-01 13:43:55.618594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.083 [2024-10-01 13:43:55.618612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.083 [2024-10-01 13:43:55.618626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.083 [2024-10-01 13:43:55.618659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.083 [2024-10-01 13:43:55.628950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.083 [2024-10-01 13:43:55.629078] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.083 [2024-10-01 13:43:55.629118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.083 [2024-10-01 13:43:55.629139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.083 [2024-10-01 13:43:55.629174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.083 [2024-10-01 13:43:55.630113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.083 [2024-10-01 13:43:55.630155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.083 [2024-10-01 13:43:55.630179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.083 [2024-10-01 13:43:55.630380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.083 [2024-10-01 13:43:55.640169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.083 [2024-10-01 13:43:55.640895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.083 [2024-10-01 13:43:55.640963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.083 [2024-10-01 13:43:55.640987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.083 [2024-10-01 13:43:55.641077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.083 [2024-10-01 13:43:55.641141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.083 [2024-10-01 13:43:55.641166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.083 [2024-10-01 13:43:55.641181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.083 [2024-10-01 13:43:55.641215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.083 [2024-10-01 13:43:55.651220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.083 [2024-10-01 13:43:55.651371] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.083 [2024-10-01 13:43:55.651405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.083 [2024-10-01 13:43:55.651425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.083 [2024-10-01 13:43:55.651460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.083 [2024-10-01 13:43:55.651503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.083 [2024-10-01 13:43:55.651523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.083 [2024-10-01 13:43:55.651553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.083 [2024-10-01 13:43:55.651591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.083 [2024-10-01 13:43:55.664581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.083 8038.33 IOPS, 31.40 MiB/s [2024-10-01 13:43:55.666010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.083 [2024-10-01 13:43:55.666062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.083 [2024-10-01 13:43:55.666086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.083 [2024-10-01 13:43:55.666961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.083 [2024-10-01 13:43:55.667105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.083 [2024-10-01 13:43:55.667143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.083 [2024-10-01 13:43:55.667162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.083 [2024-10-01 13:43:55.667201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.083 [2024-10-01 13:43:55.675194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.083 [2024-10-01 13:43:55.675318] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.083 [2024-10-01 13:43:55.675352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.083 [2024-10-01 13:43:55.675371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.083 [2024-10-01 13:43:55.675405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.083 [2024-10-01 13:43:55.676706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.083 [2024-10-01 13:43:55.676747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.083 [2024-10-01 13:43:55.676766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.083 [2024-10-01 13:43:55.677658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.083 [2024-10-01 13:43:55.685295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.083 [2024-10-01 13:43:55.685423] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.083 [2024-10-01 13:43:55.685458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.083 [2024-10-01 13:43:55.685485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.083 [2024-10-01 13:43:55.685521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.083 [2024-10-01 13:43:55.685572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.083 [2024-10-01 13:43:55.685592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.083 [2024-10-01 13:43:55.685607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.083 [2024-10-01 13:43:55.685869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.083 [2024-10-01 13:43:55.696075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.083 [2024-10-01 13:43:55.696205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.083 [2024-10-01 13:43:55.696249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.083 [2024-10-01 13:43:55.696270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.083 [2024-10-01 13:43:55.696311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.083 [2024-10-01 13:43:55.696344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.083 [2024-10-01 13:43:55.696361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.083 [2024-10-01 13:43:55.696376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.083 [2024-10-01 13:43:55.696441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.083 [2024-10-01 13:43:55.707937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.083 [2024-10-01 13:43:55.708133] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.083 [2024-10-01 13:43:55.708170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.083 [2024-10-01 13:43:55.708189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.083 [2024-10-01 13:43:55.708228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.083 [2024-10-01 13:43:55.708262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.083 [2024-10-01 13:43:55.708280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.083 [2024-10-01 13:43:55.708296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.083 [2024-10-01 13:43:55.708330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.083 [2024-10-01 13:43:55.718362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.083 [2024-10-01 13:43:55.718594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.083 [2024-10-01 13:43:55.718634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.083 [2024-10-01 13:43:55.718654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.083 [2024-10-01 13:43:55.719642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.083 [2024-10-01 13:43:55.719949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.083 [2024-10-01 13:43:55.719991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.083 [2024-10-01 13:43:55.720011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.083 [2024-10-01 13:43:55.720154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.083 [2024-10-01 13:43:55.729564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.083 [2024-10-01 13:43:55.729756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.083 [2024-10-01 13:43:55.729794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.083 [2024-10-01 13:43:55.729813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.084 [2024-10-01 13:43:55.729849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.084 [2024-10-01 13:43:55.729891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.084 [2024-10-01 13:43:55.729911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.084 [2024-10-01 13:43:55.729926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.084 [2024-10-01 13:43:55.729960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.084 [2024-10-01 13:43:55.739938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.084 [2024-10-01 13:43:55.740070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.084 [2024-10-01 13:43:55.740103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.084 [2024-10-01 13:43:55.740122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.084 [2024-10-01 13:43:55.740167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.084 [2024-10-01 13:43:55.740201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.084 [2024-10-01 13:43:55.740228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.084 [2024-10-01 13:43:55.740243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.084 [2024-10-01 13:43:55.740277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.084 [2024-10-01 13:43:55.751111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.084 [2024-10-01 13:43:55.751247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.084 [2024-10-01 13:43:55.751282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.084 [2024-10-01 13:43:55.751330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.084 [2024-10-01 13:43:55.751369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.084 [2024-10-01 13:43:55.751403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.084 [2024-10-01 13:43:55.751427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.084 [2024-10-01 13:43:55.751442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.084 [2024-10-01 13:43:55.751475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.084 [2024-10-01 13:43:55.761331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.084 [2024-10-01 13:43:55.761466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.084 [2024-10-01 13:43:55.761502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.084 [2024-10-01 13:43:55.761521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.084 [2024-10-01 13:43:55.762472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.084 [2024-10-01 13:43:55.762748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.084 [2024-10-01 13:43:55.762788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.084 [2024-10-01 13:43:55.762807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.084 [2024-10-01 13:43:55.762898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.084 [2024-10-01 13:43:55.772141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.084 [2024-10-01 13:43:55.772274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.084 [2024-10-01 13:43:55.772309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.084 [2024-10-01 13:43:55.772327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.084 [2024-10-01 13:43:55.772361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.084 [2024-10-01 13:43:55.772393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.084 [2024-10-01 13:43:55.772412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.084 [2024-10-01 13:43:55.772426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.084 [2024-10-01 13:43:55.772459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.084 [2024-10-01 13:43:55.782244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.084 [2024-10-01 13:43:55.782379] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.084 [2024-10-01 13:43:55.782414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.084 [2024-10-01 13:43:55.782433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.084 [2024-10-01 13:43:55.782468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.084 [2024-10-01 13:43:55.782501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.084 [2024-10-01 13:43:55.782574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.084 [2024-10-01 13:43:55.782592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.084 [2024-10-01 13:43:55.782627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.084 [2024-10-01 13:43:55.793309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.084 [2024-10-01 13:43:55.793502] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.084 [2024-10-01 13:43:55.793555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.084 [2024-10-01 13:43:55.793578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.084 [2024-10-01 13:43:55.793615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.084 [2024-10-01 13:43:55.793649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.084 [2024-10-01 13:43:55.793666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.084 [2024-10-01 13:43:55.793682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.084 [2024-10-01 13:43:55.793715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.084 [2024-10-01 13:43:55.803449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.084 [2024-10-01 13:43:55.804528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.084 [2024-10-01 13:43:55.804593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.084 [2024-10-01 13:43:55.804617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.084 [2024-10-01 13:43:55.804813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.084 [2024-10-01 13:43:55.804915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.084 [2024-10-01 13:43:55.804938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.084 [2024-10-01 13:43:55.804953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.084 [2024-10-01 13:43:55.806202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.084 [2024-10-01 13:43:55.814111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.084 [2024-10-01 13:43:55.814234] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.084 [2024-10-01 13:43:55.814268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.084 [2024-10-01 13:43:55.814288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.084 [2024-10-01 13:43:55.814322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.084 [2024-10-01 13:43:55.814372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.084 [2024-10-01 13:43:55.814402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.084 [2024-10-01 13:43:55.814418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.084 [2024-10-01 13:43:55.814460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.084 [2024-10-01 13:43:55.824220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.084 [2024-10-01 13:43:55.824429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.085 [2024-10-01 13:43:55.824467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.085 [2024-10-01 13:43:55.824488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.085 [2024-10-01 13:43:55.824524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.085 [2024-10-01 13:43:55.824577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.085 [2024-10-01 13:43:55.824596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.085 [2024-10-01 13:43:55.824611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.085 [2024-10-01 13:43:55.824645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.085 [2024-10-01 13:43:55.835238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.085 [2024-10-01 13:43:55.835443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.085 [2024-10-01 13:43:55.835481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.085 [2024-10-01 13:43:55.835501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.085 [2024-10-01 13:43:55.835552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.085 [2024-10-01 13:43:55.835609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.085 [2024-10-01 13:43:55.835632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.085 [2024-10-01 13:43:55.835648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.085 [2024-10-01 13:43:55.835682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.085 [2024-10-01 13:43:55.845418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.085 [2024-10-01 13:43:55.845597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.085 [2024-10-01 13:43:55.845640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.085 [2024-10-01 13:43:55.845660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.085 [2024-10-01 13:43:55.846606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.085 [2024-10-01 13:43:55.846823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.085 [2024-10-01 13:43:55.846866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.085 [2024-10-01 13:43:55.846884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.085 [2024-10-01 13:43:55.846967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.085 [2024-10-01 13:43:55.856137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.085 [2024-10-01 13:43:55.856274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.085 [2024-10-01 13:43:55.856308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.085 [2024-10-01 13:43:55.856327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.085 [2024-10-01 13:43:55.856394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.085 [2024-10-01 13:43:55.856429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.085 [2024-10-01 13:43:55.856447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.085 [2024-10-01 13:43:55.856462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.085 [2024-10-01 13:43:55.856494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.085 [2024-10-01 13:43:55.866254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.085 [2024-10-01 13:43:55.866396] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.085 [2024-10-01 13:43:55.866430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.085 [2024-10-01 13:43:55.866449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.085 [2024-10-01 13:43:55.866483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.085 [2024-10-01 13:43:55.866517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.085 [2024-10-01 13:43:55.866553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.085 [2024-10-01 13:43:55.866571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.085 [2024-10-01 13:43:55.866605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.085 [2024-10-01 13:43:55.877964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.085 [2024-10-01 13:43:55.878122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.085 [2024-10-01 13:43:55.878160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.085 [2024-10-01 13:43:55.878180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.085 [2024-10-01 13:43:55.878215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.085 [2024-10-01 13:43:55.878249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.085 [2024-10-01 13:43:55.878267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.085 [2024-10-01 13:43:55.878282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.085 [2024-10-01 13:43:55.878315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.085 [2024-10-01 13:43:55.888499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.085 [2024-10-01 13:43:55.888726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.085 [2024-10-01 13:43:55.888770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.085 [2024-10-01 13:43:55.888791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.085 [2024-10-01 13:43:55.889739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.085 [2024-10-01 13:43:55.889964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.085 [2024-10-01 13:43:55.890001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.085 [2024-10-01 13:43:55.890050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.085 [2024-10-01 13:43:55.890136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.085 [2024-10-01 13:43:55.899381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.085 [2024-10-01 13:43:55.899508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.085 [2024-10-01 13:43:55.899564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.085 [2024-10-01 13:43:55.899586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.085 [2024-10-01 13:43:55.899624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.085 [2024-10-01 13:43:55.899677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.085 [2024-10-01 13:43:55.899711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.085 [2024-10-01 13:43:55.899738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.085 [2024-10-01 13:43:55.899794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.085 [2024-10-01 13:43:55.909677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.085 [2024-10-01 13:43:55.909816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.085 [2024-10-01 13:43:55.909857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.085 [2024-10-01 13:43:55.909878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.085 [2024-10-01 13:43:55.909913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.085 [2024-10-01 13:43:55.909946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.085 [2024-10-01 13:43:55.909963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.085 [2024-10-01 13:43:55.909978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.085 [2024-10-01 13:43:55.910010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.085 [2024-10-01 13:43:55.920125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.085 [2024-10-01 13:43:55.921007] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.085 [2024-10-01 13:43:55.921064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.085 [2024-10-01 13:43:55.921087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.085 [2024-10-01 13:43:55.921271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.085 [2024-10-01 13:43:55.921321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.085 [2024-10-01 13:43:55.921341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.085 [2024-10-01 13:43:55.921356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.085 [2024-10-01 13:43:55.921390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.086 [2024-10-01 13:43:55.931733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.086 [2024-10-01 13:43:55.931946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.086 [2024-10-01 13:43:55.932027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.086 [2024-10-01 13:43:55.932052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.086 [2024-10-01 13:43:55.933026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.086 [2024-10-01 13:43:55.933262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.086 [2024-10-01 13:43:55.933300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.086 [2024-10-01 13:43:55.933320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.086 [2024-10-01 13:43:55.934632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.086 [2024-10-01 13:43:55.942677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.086 [2024-10-01 13:43:55.942816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.086 [2024-10-01 13:43:55.942851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.086 [2024-10-01 13:43:55.942870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.086 [2024-10-01 13:43:55.942904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.086 [2024-10-01 13:43:55.942937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.086 [2024-10-01 13:43:55.942955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.086 [2024-10-01 13:43:55.942969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.086 [2024-10-01 13:43:55.943002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.086 [2024-10-01 13:43:55.952928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.086 [2024-10-01 13:43:55.953070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.086 [2024-10-01 13:43:55.953103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.086 [2024-10-01 13:43:55.953122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.086 [2024-10-01 13:43:55.953156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.086 [2024-10-01 13:43:55.953189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.086 [2024-10-01 13:43:55.953207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.086 [2024-10-01 13:43:55.953221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.086 [2024-10-01 13:43:55.953253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.086 [2024-10-01 13:43:55.964110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.086 [2024-10-01 13:43:55.964240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.086 [2024-10-01 13:43:55.964273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.086 [2024-10-01 13:43:55.964291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.086 [2024-10-01 13:43:55.964325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.086 [2024-10-01 13:43:55.964382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.086 [2024-10-01 13:43:55.964401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.086 [2024-10-01 13:43:55.964416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.086 [2024-10-01 13:43:55.964448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.086 [2024-10-01 13:43:55.974393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.086 [2024-10-01 13:43:55.974518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.086 [2024-10-01 13:43:55.974565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.086 [2024-10-01 13:43:55.974586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.086 [2024-10-01 13:43:55.974634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.086 [2024-10-01 13:43:55.975568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.086 [2024-10-01 13:43:55.975607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.086 [2024-10-01 13:43:55.975625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.086 [2024-10-01 13:43:55.975826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.086 [2024-10-01 13:43:55.985277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.086 [2024-10-01 13:43:55.985398] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.086 [2024-10-01 13:43:55.985431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.086 [2024-10-01 13:43:55.985449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.086 [2024-10-01 13:43:55.985482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.086 [2024-10-01 13:43:55.985514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.086 [2024-10-01 13:43:55.985532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.086 [2024-10-01 13:43:55.985564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.086 [2024-10-01 13:43:55.985597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.086 [2024-10-01 13:43:55.995412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.086 [2024-10-01 13:43:55.995552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.086 [2024-10-01 13:43:55.995586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.086 [2024-10-01 13:43:55.995604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.086 [2024-10-01 13:43:55.995639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.086 [2024-10-01 13:43:55.995672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.086 [2024-10-01 13:43:55.995689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.086 [2024-10-01 13:43:55.995703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.086 [2024-10-01 13:43:55.995759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.086 [2024-10-01 13:43:56.006561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.086 [2024-10-01 13:43:56.006688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.086 [2024-10-01 13:43:56.006721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.086 [2024-10-01 13:43:56.006740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.086 [2024-10-01 13:43:56.006775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.086 [2024-10-01 13:43:56.006807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.086 [2024-10-01 13:43:56.006825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.086 [2024-10-01 13:43:56.006840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.086 [2024-10-01 13:43:56.006872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.086 [2024-10-01 13:43:56.016739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.086 [2024-10-01 13:43:56.016861] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.086 [2024-10-01 13:43:56.016894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.086 [2024-10-01 13:43:56.016912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.086 [2024-10-01 13:43:56.016960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.086 [2024-10-01 13:43:56.017907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.086 [2024-10-01 13:43:56.017946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.086 [2024-10-01 13:43:56.017966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.086 [2024-10-01 13:43:56.018155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.086 [2024-10-01 13:43:56.027606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.086 [2024-10-01 13:43:56.027744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.086 [2024-10-01 13:43:56.027779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.086 [2024-10-01 13:43:56.027798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.086 [2024-10-01 13:43:56.027833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.086 [2024-10-01 13:43:56.027866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.086 [2024-10-01 13:43:56.027897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.086 [2024-10-01 13:43:56.027912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.086 [2024-10-01 13:43:56.027945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.086 [2024-10-01 13:43:56.038055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.086 [2024-10-01 13:43:56.038200] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.087 [2024-10-01 13:43:56.038235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.087 [2024-10-01 13:43:56.038297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.087 [2024-10-01 13:43:56.038336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.087 [2024-10-01 13:43:56.038369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.087 [2024-10-01 13:43:56.038387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.087 [2024-10-01 13:43:56.038402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.087 [2024-10-01 13:43:56.038435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.087 [2024-10-01 13:43:56.049550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.087 [2024-10-01 13:43:56.049702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.087 [2024-10-01 13:43:56.049746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.087 [2024-10-01 13:43:56.049772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.087 [2024-10-01 13:43:56.049810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.087 [2024-10-01 13:43:56.049844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.087 [2024-10-01 13:43:56.049863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.087 [2024-10-01 13:43:56.049878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.087 [2024-10-01 13:43:56.049911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.087 [2024-10-01 13:43:56.060356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.087 [2024-10-01 13:43:56.060486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.087 [2024-10-01 13:43:56.060520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.087 [2024-10-01 13:43:56.060555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.087 [2024-10-01 13:43:56.060594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.087 [2024-10-01 13:43:56.060627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.087 [2024-10-01 13:43:56.060645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.087 [2024-10-01 13:43:56.060660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.087 [2024-10-01 13:43:56.060693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.087 [2024-10-01 13:43:56.071849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.087 [2024-10-01 13:43:56.072080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.087 [2024-10-01 13:43:56.072119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.087 [2024-10-01 13:43:56.072140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.087 [2024-10-01 13:43:56.072178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.087 [2024-10-01 13:43:56.072213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.087 [2024-10-01 13:43:56.072262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.087 [2024-10-01 13:43:56.072280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.087 [2024-10-01 13:43:56.072315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.087 [2024-10-01 13:43:56.082269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.087 [2024-10-01 13:43:56.082438] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.087 [2024-10-01 13:43:56.082475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.087 [2024-10-01 13:43:56.082495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.087 [2024-10-01 13:43:56.082532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.087 [2024-10-01 13:43:56.082586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.087 [2024-10-01 13:43:56.082604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.087 [2024-10-01 13:43:56.082619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.087 [2024-10-01 13:43:56.082652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.087 [2024-10-01 13:43:56.093339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.087 [2024-10-01 13:43:56.093473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.087 [2024-10-01 13:43:56.093514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.087 [2024-10-01 13:43:56.093548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.087 [2024-10-01 13:43:56.093587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.087 [2024-10-01 13:43:56.093636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.087 [2024-10-01 13:43:56.093659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.087 [2024-10-01 13:43:56.093674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.087 [2024-10-01 13:43:56.093707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.087 [2024-10-01 13:43:56.103475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.087 [2024-10-01 13:43:56.103617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.087 [2024-10-01 13:43:56.103652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.087 [2024-10-01 13:43:56.103671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.087 [2024-10-01 13:43:56.104628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.087 [2024-10-01 13:43:56.104848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.087 [2024-10-01 13:43:56.104884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.087 [2024-10-01 13:43:56.104902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.087 [2024-10-01 13:43:56.104981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.087 [2024-10-01 13:43:56.114308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.087 [2024-10-01 13:43:56.114495] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.087 [2024-10-01 13:43:56.114531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.087 [2024-10-01 13:43:56.114567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.087 [2024-10-01 13:43:56.114603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.087 [2024-10-01 13:43:56.114637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.087 [2024-10-01 13:43:56.114654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.087 [2024-10-01 13:43:56.114669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.087 [2024-10-01 13:43:56.114701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.087 [2024-10-01 13:43:56.124522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.087 [2024-10-01 13:43:56.124662] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.087 [2024-10-01 13:43:56.124696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.087 [2024-10-01 13:43:56.124714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.087 [2024-10-01 13:43:56.124748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.087 [2024-10-01 13:43:56.124780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.087 [2024-10-01 13:43:56.124798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.087 [2024-10-01 13:43:56.124812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.087 [2024-10-01 13:43:56.124844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.087 [2024-10-01 13:43:56.135628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.087 [2024-10-01 13:43:56.135760] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.088 [2024-10-01 13:43:56.135794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.088 [2024-10-01 13:43:56.135813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.088 [2024-10-01 13:43:56.135847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.088 [2024-10-01 13:43:56.135910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.088 [2024-10-01 13:43:56.135934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.088 [2024-10-01 13:43:56.135949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.088 [2024-10-01 13:43:56.135983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.088 [2024-10-01 13:43:56.145776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.088 [2024-10-01 13:43:56.145903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.088 [2024-10-01 13:43:56.145937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.088 [2024-10-01 13:43:56.145955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.088 [2024-10-01 13:43:56.146923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.088 [2024-10-01 13:43:56.147153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.088 [2024-10-01 13:43:56.147189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.088 [2024-10-01 13:43:56.147207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.088 [2024-10-01 13:43:56.147287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.088 [2024-10-01 13:43:56.156582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.088 [2024-10-01 13:43:56.156706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.088 [2024-10-01 13:43:56.156742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.088 [2024-10-01 13:43:56.156761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.088 [2024-10-01 13:43:56.156808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.088 [2024-10-01 13:43:56.156846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.088 [2024-10-01 13:43:56.156865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.088 [2024-10-01 13:43:56.156879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.088 [2024-10-01 13:43:56.156911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.088 [2024-10-01 13:43:56.166684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.088 [2024-10-01 13:43:56.166812] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.088 [2024-10-01 13:43:56.166846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.088 [2024-10-01 13:43:56.166865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.088 [2024-10-01 13:43:56.166898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.088 [2024-10-01 13:43:56.166931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.088 [2024-10-01 13:43:56.166950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.088 [2024-10-01 13:43:56.166965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.088 [2024-10-01 13:43:56.166996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.088 [2024-10-01 13:43:56.177888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.088 [2024-10-01 13:43:56.178094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.088 [2024-10-01 13:43:56.178131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.088 [2024-10-01 13:43:56.178152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.088 [2024-10-01 13:43:56.178204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.088 [2024-10-01 13:43:56.178242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.088 [2024-10-01 13:43:56.178261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.088 [2024-10-01 13:43:56.178310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.088 [2024-10-01 13:43:56.178346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.088 [2024-10-01 13:43:56.188697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.088 [2024-10-01 13:43:56.188830] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.088 [2024-10-01 13:43:56.188869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.088 [2024-10-01 13:43:56.188888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.088 [2024-10-01 13:43:56.188922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.088 [2024-10-01 13:43:56.189866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.088 [2024-10-01 13:43:56.189908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.088 [2024-10-01 13:43:56.189926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.088 [2024-10-01 13:43:56.190142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.088 [2024-10-01 13:43:56.199768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.088 [2024-10-01 13:43:56.199970] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.088 [2024-10-01 13:43:56.200009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.088 [2024-10-01 13:43:56.200029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.088 [2024-10-01 13:43:56.200066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.088 [2024-10-01 13:43:56.200100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.088 [2024-10-01 13:43:56.200119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.088 [2024-10-01 13:43:56.200134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.088 [2024-10-01 13:43:56.200168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.088 [2024-10-01 13:43:56.209932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.088 [2024-10-01 13:43:56.210063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.088 [2024-10-01 13:43:56.210096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.088 [2024-10-01 13:43:56.210115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.088 [2024-10-01 13:43:56.210149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.088 [2024-10-01 13:43:56.210181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.088 [2024-10-01 13:43:56.210198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.088 [2024-10-01 13:43:56.210212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.088 [2024-10-01 13:43:56.210244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.088 [2024-10-01 13:43:56.221110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.088 [2024-10-01 13:43:56.221245] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.088 [2024-10-01 13:43:56.221308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.088 [2024-10-01 13:43:56.221329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.088 [2024-10-01 13:43:56.221364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.088 [2024-10-01 13:43:56.221398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.088 [2024-10-01 13:43:56.221416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.088 [2024-10-01 13:43:56.221430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.088 [2024-10-01 13:43:56.221463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.088 [2024-10-01 13:43:56.232133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.088 [2024-10-01 13:43:56.232264] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.088 [2024-10-01 13:43:56.232298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.088 [2024-10-01 13:43:56.232317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.088 [2024-10-01 13:43:56.232350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.088 [2024-10-01 13:43:56.232383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.088 [2024-10-01 13:43:56.232401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.088 [2024-10-01 13:43:56.232416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.088 [2024-10-01 13:43:56.232447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.088 [2024-10-01 13:43:56.243218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.088 [2024-10-01 13:43:56.243345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.088 [2024-10-01 13:43:56.243380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.088 [2024-10-01 13:43:56.243398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.089 [2024-10-01 13:43:56.243432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.089 [2024-10-01 13:43:56.243464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.089 [2024-10-01 13:43:56.243482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.089 [2024-10-01 13:43:56.243496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.089 [2024-10-01 13:43:56.243529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.089 [2024-10-01 13:43:56.253352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.089 [2024-10-01 13:43:56.253495] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.089 [2024-10-01 13:43:56.253552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.089 [2024-10-01 13:43:56.253576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.089 [2024-10-01 13:43:56.253612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.089 [2024-10-01 13:43:56.253679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.089 [2024-10-01 13:43:56.253698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.089 [2024-10-01 13:43:56.253719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.089 [2024-10-01 13:43:56.253752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.089 [2024-10-01 13:43:56.264557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.089 [2024-10-01 13:43:56.264710] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.089 [2024-10-01 13:43:56.264744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.089 [2024-10-01 13:43:56.264763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.089 [2024-10-01 13:43:56.264799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.089 [2024-10-01 13:43:56.264832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.089 [2024-10-01 13:43:56.264851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.089 [2024-10-01 13:43:56.264866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.089 [2024-10-01 13:43:56.264898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.089 [2024-10-01 13:43:56.275219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.089 [2024-10-01 13:43:56.275353] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.089 [2024-10-01 13:43:56.275392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.089 [2024-10-01 13:43:56.275411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.089 [2024-10-01 13:43:56.275446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.089 [2024-10-01 13:43:56.275479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.089 [2024-10-01 13:43:56.275496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.089 [2024-10-01 13:43:56.275510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.089 [2024-10-01 13:43:56.275560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.089 [2024-10-01 13:43:56.286804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.089 [2024-10-01 13:43:56.286961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.089 [2024-10-01 13:43:56.286998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.089 [2024-10-01 13:43:56.287018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.089 [2024-10-01 13:43:56.287053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.089 [2024-10-01 13:43:56.287087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.089 [2024-10-01 13:43:56.287105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.089 [2024-10-01 13:43:56.287120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.089 [2024-10-01 13:43:56.287190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.089 [2024-10-01 13:43:56.297319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.089 [2024-10-01 13:43:56.297477] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.089 [2024-10-01 13:43:56.297513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.089 [2024-10-01 13:43:56.297548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.089 [2024-10-01 13:43:56.297589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.089 [2024-10-01 13:43:56.297623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.089 [2024-10-01 13:43:56.297641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.089 [2024-10-01 13:43:56.297656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.089 [2024-10-01 13:43:56.297689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.089 [2024-10-01 13:43:56.307609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.089 [2024-10-01 13:43:56.307746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.089 [2024-10-01 13:43:56.307781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.089 [2024-10-01 13:43:56.307801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.089 [2024-10-01 13:43:56.307835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.089 [2024-10-01 13:43:56.307879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.089 [2024-10-01 13:43:56.307900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.089 [2024-10-01 13:43:56.307916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.089 [2024-10-01 13:43:56.307949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.089 [2024-10-01 13:43:56.317977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.089 [2024-10-01 13:43:56.318112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.089 [2024-10-01 13:43:56.318148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.089 [2024-10-01 13:43:56.318176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.089 [2024-10-01 13:43:56.319131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.089 [2024-10-01 13:43:56.319798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.089 [2024-10-01 13:43:56.319839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.089 [2024-10-01 13:43:56.319858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.089 [2024-10-01 13:43:56.319968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.089 [2024-10-01 13:43:56.328092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.089 [2024-10-01 13:43:56.328226] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.089 [2024-10-01 13:43:56.328260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.089 [2024-10-01 13:43:56.328306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.089 [2024-10-01 13:43:56.328343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.089 [2024-10-01 13:43:56.328376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.089 [2024-10-01 13:43:56.328394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.089 [2024-10-01 13:43:56.328408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.089 [2024-10-01 13:43:56.329179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.089 [2024-10-01 13:43:56.338300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.089 [2024-10-01 13:43:56.338491] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.089 [2024-10-01 13:43:56.338528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.089 [2024-10-01 13:43:56.338564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.090 [2024-10-01 13:43:56.338603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.090 [2024-10-01 13:43:56.338667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.090 [2024-10-01 13:43:56.338686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.090 [2024-10-01 13:43:56.338702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.090 [2024-10-01 13:43:56.338736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.090 [2024-10-01 13:43:56.348451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.090 [2024-10-01 13:43:56.348656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.090 [2024-10-01 13:43:56.348693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.090 [2024-10-01 13:43:56.348712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.090 [2024-10-01 13:43:56.348749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.090 [2024-10-01 13:43:56.348783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.090 [2024-10-01 13:43:56.348814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.090 [2024-10-01 13:43:56.348835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.090 [2024-10-01 13:43:56.348871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.090 [2024-10-01 13:43:56.359520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.090 [2024-10-01 13:43:56.359684] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.090 [2024-10-01 13:43:56.359719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.090 [2024-10-01 13:43:56.359739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.090 [2024-10-01 13:43:56.360705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.090 [2024-10-01 13:43:56.360922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.090 [2024-10-01 13:43:56.360982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.090 [2024-10-01 13:43:56.361001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.090 [2024-10-01 13:43:56.361094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.090 [2024-10-01 13:43:56.370277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.090 [2024-10-01 13:43:56.370402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.090 [2024-10-01 13:43:56.370437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.090 [2024-10-01 13:43:56.370455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.090 [2024-10-01 13:43:56.370489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.090 [2024-10-01 13:43:56.370528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.090 [2024-10-01 13:43:56.370562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.090 [2024-10-01 13:43:56.370578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.090 [2024-10-01 13:43:56.370612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.090 [2024-10-01 13:43:56.380420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.090 [2024-10-01 13:43:56.380579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.090 [2024-10-01 13:43:56.380630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.090 [2024-10-01 13:43:56.380656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.090 [2024-10-01 13:43:56.380693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.090 [2024-10-01 13:43:56.380734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.090 [2024-10-01 13:43:56.380752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.090 [2024-10-01 13:43:56.380767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.090 [2024-10-01 13:43:56.380800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.090 [2024-10-01 13:43:56.391787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.090 [2024-10-01 13:43:56.392001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.090 [2024-10-01 13:43:56.392039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.090 [2024-10-01 13:43:56.392060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.090 [2024-10-01 13:43:56.392106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.090 [2024-10-01 13:43:56.392140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.090 [2024-10-01 13:43:56.392158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.090 [2024-10-01 13:43:56.392174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.090 [2024-10-01 13:43:56.392207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.090 [2024-10-01 13:43:56.402025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.090 [2024-10-01 13:43:56.402190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.090 [2024-10-01 13:43:56.402224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.090 [2024-10-01 13:43:56.402244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.090 [2024-10-01 13:43:56.403217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.090 [2024-10-01 13:43:56.403447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.090 [2024-10-01 13:43:56.403484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.090 [2024-10-01 13:43:56.403503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.090 [2024-10-01 13:43:56.403600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.090 [2024-10-01 13:43:56.412876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.090 [2024-10-01 13:43:56.413008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.090 [2024-10-01 13:43:56.413042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.090 [2024-10-01 13:43:56.413060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.090 [2024-10-01 13:43:56.413094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.090 [2024-10-01 13:43:56.413136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.090 [2024-10-01 13:43:56.413154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.090 [2024-10-01 13:43:56.413169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.090 [2024-10-01 13:43:56.413208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.090 [2024-10-01 13:43:56.422990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.090 [2024-10-01 13:43:56.423128] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.090 [2024-10-01 13:43:56.423163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.090 [2024-10-01 13:43:56.423183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.090 [2024-10-01 13:43:56.423217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.090 [2024-10-01 13:43:56.423250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.090 [2024-10-01 13:43:56.423268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.090 [2024-10-01 13:43:56.423282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.090 [2024-10-01 13:43:56.423319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.090 [2024-10-01 13:43:56.434039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.090 [2024-10-01 13:43:56.434164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.090 [2024-10-01 13:43:56.434198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.090 [2024-10-01 13:43:56.434216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.090 [2024-10-01 13:43:56.434275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.090 [2024-10-01 13:43:56.434309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.090 [2024-10-01 13:43:56.434327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.090 [2024-10-01 13:43:56.434341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.090 [2024-10-01 13:43:56.434373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.090 [2024-10-01 13:43:56.444231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.090 [2024-10-01 13:43:56.444356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.090 [2024-10-01 13:43:56.444395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.090 [2024-10-01 13:43:56.444414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.090 [2024-10-01 13:43:56.444461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.090 [2024-10-01 13:43:56.444498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.091 [2024-10-01 13:43:56.444515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.091 [2024-10-01 13:43:56.444530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.091 [2024-10-01 13:43:56.445486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.091 [2024-10-01 13:43:56.455164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.091 [2024-10-01 13:43:56.455362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.091 [2024-10-01 13:43:56.455401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.091 [2024-10-01 13:43:56.455421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.091 [2024-10-01 13:43:56.455457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.091 [2024-10-01 13:43:56.455491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.091 [2024-10-01 13:43:56.455510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.091 [2024-10-01 13:43:56.455526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.091 [2024-10-01 13:43:56.455578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.091 [2024-10-01 13:43:56.465300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.091 [2024-10-01 13:43:56.465429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.091 [2024-10-01 13:43:56.465462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.091 [2024-10-01 13:43:56.465480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.091 [2024-10-01 13:43:56.465513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.091 [2024-10-01 13:43:56.465561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.091 [2024-10-01 13:43:56.465582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.091 [2024-10-01 13:43:56.465623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.091 [2024-10-01 13:43:56.465658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.091 [2024-10-01 13:43:56.476305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.091 [2024-10-01 13:43:56.476436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.091 [2024-10-01 13:43:56.476471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.091 [2024-10-01 13:43:56.476491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.091 [2024-10-01 13:43:56.476526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.091 [2024-10-01 13:43:56.476576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.091 [2024-10-01 13:43:56.476595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.091 [2024-10-01 13:43:56.476610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.091 [2024-10-01 13:43:56.476642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.091 [2024-10-01 13:43:56.486409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.091 [2024-10-01 13:43:56.486547] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.091 [2024-10-01 13:43:56.486581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.091 [2024-10-01 13:43:56.486600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.091 [2024-10-01 13:43:56.486648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.091 [2024-10-01 13:43:56.486685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.091 [2024-10-01 13:43:56.486703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.091 [2024-10-01 13:43:56.486718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.091 [2024-10-01 13:43:56.486750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.091 [2024-10-01 13:43:56.496507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.091 [2024-10-01 13:43:56.496643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.091 [2024-10-01 13:43:56.496678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.091 [2024-10-01 13:43:56.496696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.091 [2024-10-01 13:43:56.496729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.091 [2024-10-01 13:43:56.496762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.091 [2024-10-01 13:43:56.496780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.091 [2024-10-01 13:43:56.496794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.091 [2024-10-01 13:43:56.496826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.091 [2024-10-01 13:43:56.508156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.091 [2024-10-01 13:43:56.508551] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.091 [2024-10-01 13:43:56.508634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.091 [2024-10-01 13:43:56.508659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.091 [2024-10-01 13:43:56.508707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.091 [2024-10-01 13:43:56.508745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.091 [2024-10-01 13:43:56.508763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.091 [2024-10-01 13:43:56.508778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.091 [2024-10-01 13:43:56.508815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.091 [2024-10-01 13:43:56.518297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.091 [2024-10-01 13:43:56.518444] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.091 [2024-10-01 13:43:56.518479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.091 [2024-10-01 13:43:56.518497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.091 [2024-10-01 13:43:56.518531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.091 [2024-10-01 13:43:56.518581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.091 [2024-10-01 13:43:56.518600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.091 [2024-10-01 13:43:56.518615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.091 [2024-10-01 13:43:56.518647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.091 [2024-10-01 13:43:56.528431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.091 [2024-10-01 13:43:56.528584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.091 [2024-10-01 13:43:56.528619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.091 [2024-10-01 13:43:56.528638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.091 [2024-10-01 13:43:56.528672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.091 [2024-10-01 13:43:56.528705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.091 [2024-10-01 13:43:56.528723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.091 [2024-10-01 13:43:56.528737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.091 [2024-10-01 13:43:56.528769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.091 [2024-10-01 13:43:56.539177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.091 [2024-10-01 13:43:56.539319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.091 [2024-10-01 13:43:56.539353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.091 [2024-10-01 13:43:56.539372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.091 [2024-10-01 13:43:56.539406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.091 [2024-10-01 13:43:56.539485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.091 [2024-10-01 13:43:56.539506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.091 [2024-10-01 13:43:56.539525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.091 [2024-10-01 13:43:56.539600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.091 [2024-10-01 13:43:56.549433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.091 [2024-10-01 13:43:56.549579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.091 [2024-10-01 13:43:56.549615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.091 [2024-10-01 13:43:56.549634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.092 [2024-10-01 13:43:56.549670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.092 [2024-10-01 13:43:56.549703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.092 [2024-10-01 13:43:56.549721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.092 [2024-10-01 13:43:56.549736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.092 [2024-10-01 13:43:56.549768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.092 [2024-10-01 13:43:56.559553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.092 [2024-10-01 13:43:56.559680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.092 [2024-10-01 13:43:56.559713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.092 [2024-10-01 13:43:56.559732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.092 [2024-10-01 13:43:56.559766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.092 [2024-10-01 13:43:56.559798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.092 [2024-10-01 13:43:56.559816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.092 [2024-10-01 13:43:56.559830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.092 [2024-10-01 13:43:56.559862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.092 [2024-10-01 13:43:56.570259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.092 [2024-10-01 13:43:56.570465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.092 [2024-10-01 13:43:56.570502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.092 [2024-10-01 13:43:56.570522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.092 [2024-10-01 13:43:56.570576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.092 [2024-10-01 13:43:56.570612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.092 [2024-10-01 13:43:56.570630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.092 [2024-10-01 13:43:56.570646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.092 [2024-10-01 13:43:56.570718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.092 [2024-10-01 13:43:56.580612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.092 [2024-10-01 13:43:56.580738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.092 [2024-10-01 13:43:56.580772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.092 [2024-10-01 13:43:56.580790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.092 [2024-10-01 13:43:56.580824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.092 [2024-10-01 13:43:56.581755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.092 [2024-10-01 13:43:56.581795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.092 [2024-10-01 13:43:56.581813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.092 [2024-10-01 13:43:56.582005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.092 [2024-10-01 13:43:56.591510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.092 [2024-10-01 13:43:56.591651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.092 [2024-10-01 13:43:56.591685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.092 [2024-10-01 13:43:56.591704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.092 [2024-10-01 13:43:56.591738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.092 [2024-10-01 13:43:56.591771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.092 [2024-10-01 13:43:56.591788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.092 [2024-10-01 13:43:56.591802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.092 [2024-10-01 13:43:56.591834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.092 [2024-10-01 13:43:56.601769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.092 [2024-10-01 13:43:56.601908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.092 [2024-10-01 13:43:56.601959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.092 [2024-10-01 13:43:56.601980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.092 [2024-10-01 13:43:56.602015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.092 [2024-10-01 13:43:56.602048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.092 [2024-10-01 13:43:56.602066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.092 [2024-10-01 13:43:56.602085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.092 [2024-10-01 13:43:56.602125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.092 [2024-10-01 13:43:56.612914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.092 [2024-10-01 13:43:56.613048] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.092 [2024-10-01 13:43:56.613088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.092 [2024-10-01 13:43:56.613134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.092 [2024-10-01 13:43:56.613172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.092 [2024-10-01 13:43:56.613206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.092 [2024-10-01 13:43:56.613224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.092 [2024-10-01 13:43:56.613240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.092 [2024-10-01 13:43:56.613273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.092 [2024-10-01 13:43:56.623022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.092 [2024-10-01 13:43:56.623165] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.092 [2024-10-01 13:43:56.623199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.092 [2024-10-01 13:43:56.623217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.092 [2024-10-01 13:43:56.624191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.092 [2024-10-01 13:43:56.624440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.092 [2024-10-01 13:43:56.624479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.092 [2024-10-01 13:43:56.624498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.092 [2024-10-01 13:43:56.624595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.092 [2024-10-01 13:43:56.633937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.092 [2024-10-01 13:43:56.634113] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.092 [2024-10-01 13:43:56.634151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.092 [2024-10-01 13:43:56.634172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.092 [2024-10-01 13:43:56.634219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.092 [2024-10-01 13:43:56.634253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.092 [2024-10-01 13:43:56.634271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.092 [2024-10-01 13:43:56.634287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.092 [2024-10-01 13:43:56.634584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.092 [2024-10-01 13:43:56.644093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.092 [2024-10-01 13:43:56.644314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.092 [2024-10-01 13:43:56.644352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.092 [2024-10-01 13:43:56.644381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.092 [2024-10-01 13:43:56.644423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.092 [2024-10-01 13:43:56.644458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.092 [2024-10-01 13:43:56.644506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.092 [2024-10-01 13:43:56.644524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.092 [2024-10-01 13:43:56.644576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.092 [2024-10-01 13:43:56.655333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.092 [2024-10-01 13:43:56.655498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.092 [2024-10-01 13:43:56.655551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.092 [2024-10-01 13:43:56.655574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.092 [2024-10-01 13:43:56.655611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.092 [2024-10-01 13:43:56.655645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.093 [2024-10-01 13:43:56.655663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.093 [2024-10-01 13:43:56.655678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.093 [2024-10-01 13:43:56.655710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.093 8218.75 IOPS, 32.10 MiB/s [2024-10-01 13:43:56.667080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.093 [2024-10-01 13:43:56.668730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.093 [2024-10-01 13:43:56.668779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.093 [2024-10-01 13:43:56.668801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.093 [2024-10-01 13:43:56.669720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.093 [2024-10-01 13:43:56.669925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.093 [2024-10-01 13:43:56.669965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.093 [2024-10-01 13:43:56.669983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.093 [2024-10-01 13:43:56.670024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.093 [2024-10-01 13:43:56.677182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.093 [2024-10-01 13:43:56.677306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.093 [2024-10-01 13:43:56.677344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.093 [2024-10-01 13:43:56.677364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.093 [2024-10-01 13:43:56.677398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.093 [2024-10-01 13:43:56.677431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.093 [2024-10-01 13:43:56.677448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.093 [2024-10-01 13:43:56.677462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.093 [2024-10-01 13:43:56.677500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.093 [2024-10-01 13:43:56.687280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.093 [2024-10-01 13:43:56.687409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.093 [2024-10-01 13:43:56.687444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.093 [2024-10-01 13:43:56.687462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.093 [2024-10-01 13:43:56.687496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.093 [2024-10-01 13:43:56.688342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.093 [2024-10-01 13:43:56.688384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.093 [2024-10-01 13:43:56.688403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.093 [2024-10-01 13:43:56.688622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.093 [2024-10-01 13:43:56.697382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.093 [2024-10-01 13:43:56.697504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.093 [2024-10-01 13:43:56.697551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.093 [2024-10-01 13:43:56.697573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.093 [2024-10-01 13:43:56.697607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.093 [2024-10-01 13:43:56.697640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.093 [2024-10-01 13:43:56.697657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.093 [2024-10-01 13:43:56.697671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.093 [2024-10-01 13:43:56.697703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.093 [2024-10-01 13:43:56.707903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.093 [2024-10-01 13:43:56.708041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.093 [2024-10-01 13:43:56.708075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.093 [2024-10-01 13:43:56.708094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.093 [2024-10-01 13:43:56.709058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.093 [2024-10-01 13:43:56.709305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.093 [2024-10-01 13:43:56.709345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.093 [2024-10-01 13:43:56.709365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.093 [2024-10-01 13:43:56.709446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.093 [2024-10-01 13:43:56.719138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.093 [2024-10-01 13:43:56.719260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.093 [2024-10-01 13:43:56.719293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.093 [2024-10-01 13:43:56.719311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.093 [2024-10-01 13:43:56.719366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.093 [2024-10-01 13:43:56.719400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.093 [2024-10-01 13:43:56.719418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.093 [2024-10-01 13:43:56.719432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.093 [2024-10-01 13:43:56.719464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.093 [2024-10-01 13:43:56.729444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.093 [2024-10-01 13:43:56.729600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.093 [2024-10-01 13:43:56.729636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.093 [2024-10-01 13:43:56.729656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.093 [2024-10-01 13:43:56.729691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.093 [2024-10-01 13:43:56.729724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.093 [2024-10-01 13:43:56.729741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.093 [2024-10-01 13:43:56.729764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.093 [2024-10-01 13:43:56.729805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.093 [2024-10-01 13:43:56.741023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.093 [2024-10-01 13:43:56.741149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.093 [2024-10-01 13:43:56.741187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.093 [2024-10-01 13:43:56.741208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.093 [2024-10-01 13:43:56.741243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.093 [2024-10-01 13:43:56.741276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.093 [2024-10-01 13:43:56.741293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.093 [2024-10-01 13:43:56.741307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.093 [2024-10-01 13:43:56.741340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.093 [2024-10-01 13:43:56.751122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.093 [2024-10-01 13:43:56.751245] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.093 [2024-10-01 13:43:56.751279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.093 [2024-10-01 13:43:56.751297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.093 [2024-10-01 13:43:56.752265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.094 [2024-10-01 13:43:56.752500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.094 [2024-10-01 13:43:56.752553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.094 [2024-10-01 13:43:56.752594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.094 [2024-10-01 13:43:56.752679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.094 [2024-10-01 13:43:56.762304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.094 [2024-10-01 13:43:56.762442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.094 [2024-10-01 13:43:56.762477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.094 [2024-10-01 13:43:56.762496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.094 [2024-10-01 13:43:56.762530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.094 [2024-10-01 13:43:56.762583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.094 [2024-10-01 13:43:56.762602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.094 [2024-10-01 13:43:56.762616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.094 [2024-10-01 13:43:56.762894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.094 [2024-10-01 13:43:56.772431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.094 [2024-10-01 13:43:56.772572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.094 [2024-10-01 13:43:56.772607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.094 [2024-10-01 13:43:56.772626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.094 [2024-10-01 13:43:56.772661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.094 [2024-10-01 13:43:56.772694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.094 [2024-10-01 13:43:56.772712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.094 [2024-10-01 13:43:56.772727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.094 [2024-10-01 13:43:56.772759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.094 [2024-10-01 13:43:56.783957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.094 [2024-10-01 13:43:56.784166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.094 [2024-10-01 13:43:56.784203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.094 [2024-10-01 13:43:56.784223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.094 [2024-10-01 13:43:56.784260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.094 [2024-10-01 13:43:56.784294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.094 [2024-10-01 13:43:56.784311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.094 [2024-10-01 13:43:56.784328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.094 [2024-10-01 13:43:56.784360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.094 [2024-10-01 13:43:56.794521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.094 [2024-10-01 13:43:56.794699] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.094 [2024-10-01 13:43:56.794734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.094 [2024-10-01 13:43:56.794773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.094 [2024-10-01 13:43:56.795762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.094 [2024-10-01 13:43:56.796012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.094 [2024-10-01 13:43:56.796052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.094 [2024-10-01 13:43:56.796071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.094 [2024-10-01 13:43:56.796158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.094 [2024-10-01 13:43:56.805570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.094 [2024-10-01 13:43:56.805695] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.094 [2024-10-01 13:43:56.805728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.094 [2024-10-01 13:43:56.805747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.094 [2024-10-01 13:43:56.805781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.094 [2024-10-01 13:43:56.805814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.094 [2024-10-01 13:43:56.805832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.094 [2024-10-01 13:43:56.805846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.094 [2024-10-01 13:43:56.805878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.094 [2024-10-01 13:43:56.816304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.094 [2024-10-01 13:43:56.816443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.094 [2024-10-01 13:43:56.816480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.094 [2024-10-01 13:43:56.816499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.094 [2024-10-01 13:43:56.816548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.094 [2024-10-01 13:43:56.816585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.094 [2024-10-01 13:43:56.816605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.094 [2024-10-01 13:43:56.816619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.094 [2024-10-01 13:43:56.816658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.094 [2024-10-01 13:43:56.828043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.094 [2024-10-01 13:43:56.828189] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.094 [2024-10-01 13:43:56.828224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.094 [2024-10-01 13:43:56.828242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.094 [2024-10-01 13:43:56.828287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.094 [2024-10-01 13:43:56.828346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.094 [2024-10-01 13:43:56.828365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.094 [2024-10-01 13:43:56.828380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.094 [2024-10-01 13:43:56.828413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.094 [2024-10-01 13:43:56.838581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.094 [2024-10-01 13:43:56.838721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.094 [2024-10-01 13:43:56.838764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.094 [2024-10-01 13:43:56.838785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.094 [2024-10-01 13:43:56.839725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.094 [2024-10-01 13:43:56.839959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.094 [2024-10-01 13:43:56.839996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.094 [2024-10-01 13:43:56.840013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.094 [2024-10-01 13:43:56.840119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.094 [2024-10-01 13:43:56.849460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.094 [2024-10-01 13:43:56.849612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.094 [2024-10-01 13:43:56.849646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.094 [2024-10-01 13:43:56.849666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.094 [2024-10-01 13:43:56.849701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.094 [2024-10-01 13:43:56.849734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.094 [2024-10-01 13:43:56.849751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.094 [2024-10-01 13:43:56.849765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.094 [2024-10-01 13:43:56.849798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.094 [2024-10-01 13:43:56.859948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.094 [2024-10-01 13:43:56.860091] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.094 [2024-10-01 13:43:56.860125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.094 [2024-10-01 13:43:56.860145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.094 [2024-10-01 13:43:56.860179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.094 [2024-10-01 13:43:56.860212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.094 [2024-10-01 13:43:56.860229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.095 [2024-10-01 13:43:56.860244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.095 [2024-10-01 13:43:56.860307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.095 [2024-10-01 13:43:56.871352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.095 [2024-10-01 13:43:56.871521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.095 [2024-10-01 13:43:56.871578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.095 [2024-10-01 13:43:56.871603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.095 [2024-10-01 13:43:56.871641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.095 [2024-10-01 13:43:56.871674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.095 [2024-10-01 13:43:56.871692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.095 [2024-10-01 13:43:56.871707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.095 [2024-10-01 13:43:56.871742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.095 [2024-10-01 13:43:56.881656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.095 [2024-10-01 13:43:56.881811] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.095 [2024-10-01 13:43:56.881846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.095 [2024-10-01 13:43:56.881867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.095 [2024-10-01 13:43:56.882838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.095 [2024-10-01 13:43:56.883062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.095 [2024-10-01 13:43:56.883100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.095 [2024-10-01 13:43:56.883127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.095 [2024-10-01 13:43:56.883209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.095 [2024-10-01 13:43:56.892952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.095 [2024-10-01 13:43:56.893095] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.095 [2024-10-01 13:43:56.893130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.095 [2024-10-01 13:43:56.893149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.095 [2024-10-01 13:43:56.893194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.095 [2024-10-01 13:43:56.893228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.095 [2024-10-01 13:43:56.893245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.095 [2024-10-01 13:43:56.893261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.095 [2024-10-01 13:43:56.893293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.095 [2024-10-01 13:43:56.903158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.095 [2024-10-01 13:43:56.903342] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.095 [2024-10-01 13:43:56.903378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.095 [2024-10-01 13:43:56.903425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.095 [2024-10-01 13:43:56.903465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.095 [2024-10-01 13:43:56.903499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.095 [2024-10-01 13:43:56.903517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.095 [2024-10-01 13:43:56.903547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.095 [2024-10-01 13:43:56.903586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.095 [2024-10-01 13:43:56.914322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.095 [2024-10-01 13:43:56.914454] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.095 [2024-10-01 13:43:56.914489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.095 [2024-10-01 13:43:56.914508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.095 [2024-10-01 13:43:56.914558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.095 [2024-10-01 13:43:56.914611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.095 [2024-10-01 13:43:56.914634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.095 [2024-10-01 13:43:56.914649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.095 [2024-10-01 13:43:56.914683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.095 [2024-10-01 13:43:56.924549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.095 [2024-10-01 13:43:56.924668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.095 [2024-10-01 13:43:56.924702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.095 [2024-10-01 13:43:56.924721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.095 [2024-10-01 13:43:56.925653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.095 [2024-10-01 13:43:56.925871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.095 [2024-10-01 13:43:56.925899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.095 [2024-10-01 13:43:56.925915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.095 [2024-10-01 13:43:56.925994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.095 [2024-10-01 13:43:56.935454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.095 [2024-10-01 13:43:56.935614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.095 [2024-10-01 13:43:56.935653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.095 [2024-10-01 13:43:56.935672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.095 [2024-10-01 13:43:56.935721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.095 [2024-10-01 13:43:56.935759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.095 [2024-10-01 13:43:56.935803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.095 [2024-10-01 13:43:56.935819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.095 [2024-10-01 13:43:56.935853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.095 [2024-10-01 13:43:56.945620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.095 [2024-10-01 13:43:56.945760] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.095 [2024-10-01 13:43:56.945793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.095 [2024-10-01 13:43:56.945812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.095 [2024-10-01 13:43:56.945846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.095 [2024-10-01 13:43:56.945878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.095 [2024-10-01 13:43:56.945896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.095 [2024-10-01 13:43:56.945910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.095 [2024-10-01 13:43:56.945942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.095 [2024-10-01 13:43:56.956757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.095 [2024-10-01 13:43:56.956896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.095 [2024-10-01 13:43:56.956930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.095 [2024-10-01 13:43:56.956948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.095 [2024-10-01 13:43:56.956982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.095 [2024-10-01 13:43:56.957030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.095 [2024-10-01 13:43:56.957052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.095 [2024-10-01 13:43:56.957067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.095 [2024-10-01 13:43:56.957100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.095 [2024-10-01 13:43:56.966944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.095 [2024-10-01 13:43:56.967073] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.095 [2024-10-01 13:43:56.967107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.095 [2024-10-01 13:43:56.967126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.095 [2024-10-01 13:43:56.967160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.095 [2024-10-01 13:43:56.968129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.095 [2024-10-01 13:43:56.968171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.095 [2024-10-01 13:43:56.968191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.095 [2024-10-01 13:43:56.968399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.096 [2024-10-01 13:43:56.977844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.096 [2024-10-01 13:43:56.977979] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.096 [2024-10-01 13:43:56.978013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.096 [2024-10-01 13:43:56.978032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.096 [2024-10-01 13:43:56.978081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.096 [2024-10-01 13:43:56.978118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.096 [2024-10-01 13:43:56.978136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.096 [2024-10-01 13:43:56.978151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.096 [2024-10-01 13:43:56.978183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.096 [2024-10-01 13:43:56.988034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.096 [2024-10-01 13:43:56.988172] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.096 [2024-10-01 13:43:56.988206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.096 [2024-10-01 13:43:56.988224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.096 [2024-10-01 13:43:56.988259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.096 [2024-10-01 13:43:56.988292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.096 [2024-10-01 13:43:56.988310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.096 [2024-10-01 13:43:56.988325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.096 [2024-10-01 13:43:56.988357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.096 [2024-10-01 13:43:56.999150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.096 [2024-10-01 13:43:56.999285] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.096 [2024-10-01 13:43:56.999319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.096 [2024-10-01 13:43:56.999343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.096 [2024-10-01 13:43:56.999386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.096 [2024-10-01 13:43:56.999436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.096 [2024-10-01 13:43:56.999459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.096 [2024-10-01 13:43:56.999474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.096 [2024-10-01 13:43:56.999506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.096 [2024-10-01 13:43:57.009373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.096 [2024-10-01 13:43:57.009497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.096 [2024-10-01 13:43:57.009530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.096 [2024-10-01 13:43:57.009568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.096 [2024-10-01 13:43:57.010522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.096 [2024-10-01 13:43:57.010761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.096 [2024-10-01 13:43:57.010800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.096 [2024-10-01 13:43:57.010818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.096 [2024-10-01 13:43:57.010898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.096 [2024-10-01 13:43:57.020413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.096 [2024-10-01 13:43:57.020615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.096 [2024-10-01 13:43:57.020653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.096 [2024-10-01 13:43:57.020673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.096 [2024-10-01 13:43:57.020710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.096 [2024-10-01 13:43:57.020744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.096 [2024-10-01 13:43:57.020761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.096 [2024-10-01 13:43:57.020777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.096 [2024-10-01 13:43:57.020810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.096 [2024-10-01 13:43:57.030567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.096 [2024-10-01 13:43:57.030704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.096 [2024-10-01 13:43:57.030738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.096 [2024-10-01 13:43:57.030757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.096 [2024-10-01 13:43:57.030792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.096 [2024-10-01 13:43:57.030825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.096 [2024-10-01 13:43:57.030843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.096 [2024-10-01 13:43:57.030857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.096 [2024-10-01 13:43:57.030889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.096 [2024-10-01 13:43:57.041727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.096 [2024-10-01 13:43:57.041860] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.096 [2024-10-01 13:43:57.041893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.096 [2024-10-01 13:43:57.041912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.096 [2024-10-01 13:43:57.041947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.096 [2024-10-01 13:43:57.041995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.096 [2024-10-01 13:43:57.042017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.096 [2024-10-01 13:43:57.042061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.096 [2024-10-01 13:43:57.042096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.096 [2024-10-01 13:43:57.051928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.096 [2024-10-01 13:43:57.052051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.096 [2024-10-01 13:43:57.052084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.096 [2024-10-01 13:43:57.052103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.096 [2024-10-01 13:43:57.052137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.096 [2024-10-01 13:43:57.053065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.096 [2024-10-01 13:43:57.053105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.096 [2024-10-01 13:43:57.053123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.096 [2024-10-01 13:43:57.053311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.096 [2024-10-01 13:43:57.062744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.096 [2024-10-01 13:43:57.062867] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.096 [2024-10-01 13:43:57.062900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.096 [2024-10-01 13:43:57.062918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.096 [2024-10-01 13:43:57.062965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.096 [2024-10-01 13:43:57.063002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.096 [2024-10-01 13:43:57.063021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.096 [2024-10-01 13:43:57.063035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.096 [2024-10-01 13:43:57.063067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.096 [2024-10-01 13:43:57.072921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.096 [2024-10-01 13:43:57.073043] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.096 [2024-10-01 13:43:57.073077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.096 [2024-10-01 13:43:57.073096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.096 [2024-10-01 13:43:57.073129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.096 [2024-10-01 13:43:57.073168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.096 [2024-10-01 13:43:57.073190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.096 [2024-10-01 13:43:57.073204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.097 [2024-10-01 13:43:57.073236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.097 [2024-10-01 13:43:57.084191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.097 [2024-10-01 13:43:57.084394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.097 [2024-10-01 13:43:57.084430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.097 [2024-10-01 13:43:57.084449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.097 [2024-10-01 13:43:57.084483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.097 [2024-10-01 13:43:57.084516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.097 [2024-10-01 13:43:57.084548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.097 [2024-10-01 13:43:57.084567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.097 [2024-10-01 13:43:57.084600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.097 [2024-10-01 13:43:57.094641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.097 [2024-10-01 13:43:57.094843] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.097 [2024-10-01 13:43:57.094881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.097 [2024-10-01 13:43:57.094902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.097 [2024-10-01 13:43:57.095855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.097 [2024-10-01 13:43:57.096100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.097 [2024-10-01 13:43:57.096139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.097 [2024-10-01 13:43:57.096158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.097 [2024-10-01 13:43:57.096252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.097 [2024-10-01 13:43:57.105990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.097 [2024-10-01 13:43:57.106148] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.097 [2024-10-01 13:43:57.106186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.097 [2024-10-01 13:43:57.106206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.097 [2024-10-01 13:43:57.106241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.097 [2024-10-01 13:43:57.106275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.097 [2024-10-01 13:43:57.106293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.097 [2024-10-01 13:43:57.106309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.097 [2024-10-01 13:43:57.106362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.097 [2024-10-01 13:43:57.116504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.097 [2024-10-01 13:43:57.116723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.097 [2024-10-01 13:43:57.116761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.097 [2024-10-01 13:43:57.116782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.097 [2024-10-01 13:43:57.116818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.097 [2024-10-01 13:43:57.116894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.097 [2024-10-01 13:43:57.116914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.097 [2024-10-01 13:43:57.116930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.097 [2024-10-01 13:43:57.116963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.097 [2024-10-01 13:43:57.128121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.097 [2024-10-01 13:43:57.128324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.097 [2024-10-01 13:43:57.128361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.097 [2024-10-01 13:43:57.128381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.097 [2024-10-01 13:43:57.128417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.097 [2024-10-01 13:43:57.128451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.097 [2024-10-01 13:43:57.128469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.097 [2024-10-01 13:43:57.128485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.097 [2024-10-01 13:43:57.128518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.097 [2024-10-01 13:43:57.138654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.097 [2024-10-01 13:43:57.138814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.097 [2024-10-01 13:43:57.138851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.097 [2024-10-01 13:43:57.138871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.097 [2024-10-01 13:43:57.139822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.097 [2024-10-01 13:43:57.140061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.097 [2024-10-01 13:43:57.140101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.097 [2024-10-01 13:43:57.140121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.097 [2024-10-01 13:43:57.140202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.097 [2024-10-01 13:43:57.150868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.097 [2024-10-01 13:43:57.151185] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.097 [2024-10-01 13:43:57.151252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.097 [2024-10-01 13:43:57.151295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.097 [2024-10-01 13:43:57.152853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.097 [2024-10-01 13:43:57.153231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.097 [2024-10-01 13:43:57.153292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.097 [2024-10-01 13:43:57.153327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.097 [2024-10-01 13:43:57.154525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.097 [2024-10-01 13:43:57.161018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.097 [2024-10-01 13:43:57.161195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.097 [2024-10-01 13:43:57.161260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.097 [2024-10-01 13:43:57.161296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.097 [2024-10-01 13:43:57.162347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.097 [2024-10-01 13:43:57.162700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.097 [2024-10-01 13:43:57.162764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.097 [2024-10-01 13:43:57.162800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.097 [2024-10-01 13:43:57.164046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.097 [2024-10-01 13:43:57.173822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.097 [2024-10-01 13:43:57.175067] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.097 [2024-10-01 13:43:57.175137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.097 [2024-10-01 13:43:57.175175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.097 [2024-10-01 13:43:57.175420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.097 [2024-10-01 13:43:57.175502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.097 [2024-10-01 13:43:57.175560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.097 [2024-10-01 13:43:57.175593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.097 [2024-10-01 13:43:57.175651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.097 [2024-10-01 13:43:57.188638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.097 [2024-10-01 13:43:57.190055] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.097 [2024-10-01 13:43:57.190130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.097 [2024-10-01 13:43:57.190167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.097 [2024-10-01 13:43:57.191354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.097 [2024-10-01 13:43:57.191702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.097 [2024-10-01 13:43:57.191759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.097 [2024-10-01 13:43:57.191790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.097 [2024-10-01 13:43:57.193075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.098 [2024-10-01 13:43:57.200848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.098 [2024-10-01 13:43:57.202499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.098 [2024-10-01 13:43:57.202588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.098 [2024-10-01 13:43:57.202664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.098 [2024-10-01 13:43:57.203804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.098 [2024-10-01 13:43:57.204030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.098 [2024-10-01 13:43:57.204089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.098 [2024-10-01 13:43:57.204122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.098 [2024-10-01 13:43:57.204184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.098 [2024-10-01 13:43:57.214768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.098 [2024-10-01 13:43:57.215167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.098 [2024-10-01 13:43:57.215241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.098 [2024-10-01 13:43:57.215279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.098 [2024-10-01 13:43:57.216856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.098 [2024-10-01 13:43:57.218081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.098 [2024-10-01 13:43:57.218145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.098 [2024-10-01 13:43:57.218180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.098 [2024-10-01 13:43:57.218357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.098 [2024-10-01 13:43:57.228192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.098 [2024-10-01 13:43:57.229502] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.098 [2024-10-01 13:43:57.229589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.098 [2024-10-01 13:43:57.229630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.098 [2024-10-01 13:43:57.231337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.098 [2024-10-01 13:43:57.232426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.098 [2024-10-01 13:43:57.232477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.098 [2024-10-01 13:43:57.232500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.098 [2024-10-01 13:43:57.232644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.098 [2024-10-01 13:43:57.238945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.098 [2024-10-01 13:43:57.239076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.098 [2024-10-01 13:43:57.239111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.098 [2024-10-01 13:43:57.239130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.098 [2024-10-01 13:43:57.239169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.098 [2024-10-01 13:43:57.240115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.098 [2024-10-01 13:43:57.240185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.098 [2024-10-01 13:43:57.240205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.098 [2024-10-01 13:43:57.240440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.098 [2024-10-01 13:43:57.249951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.098 [2024-10-01 13:43:57.250080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.098 [2024-10-01 13:43:57.250114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.098 [2024-10-01 13:43:57.250132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.098 [2024-10-01 13:43:57.250170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.098 [2024-10-01 13:43:57.250208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.098 [2024-10-01 13:43:57.250226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.098 [2024-10-01 13:43:57.250240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.098 [2024-10-01 13:43:57.250277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.098 [2024-10-01 13:43:57.260178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.098 [2024-10-01 13:43:57.260304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.098 [2024-10-01 13:43:57.260337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.098 [2024-10-01 13:43:57.260356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.098 [2024-10-01 13:43:57.260393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.098 [2024-10-01 13:43:57.260430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.098 [2024-10-01 13:43:57.260449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.098 [2024-10-01 13:43:57.260463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.098 [2024-10-01 13:43:57.260499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.098 [2024-10-01 13:43:57.271444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.098 [2024-10-01 13:43:57.271636] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.098 [2024-10-01 13:43:57.271675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.098 [2024-10-01 13:43:57.271695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.098 [2024-10-01 13:43:57.271737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.098 [2024-10-01 13:43:57.271775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.098 [2024-10-01 13:43:57.271792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.098 [2024-10-01 13:43:57.271808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.098 [2024-10-01 13:43:57.271847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.098 [2024-10-01 13:43:57.281997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.098 [2024-10-01 13:43:57.282192] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.098 [2024-10-01 13:43:57.282230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.098 [2024-10-01 13:43:57.282250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.098 [2024-10-01 13:43:57.283204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.098 [2024-10-01 13:43:57.283476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.098 [2024-10-01 13:43:57.283515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.098 [2024-10-01 13:43:57.283549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.098 [2024-10-01 13:43:57.283641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.098 [2024-10-01 13:43:57.293093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.098 [2024-10-01 13:43:57.293228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.098 [2024-10-01 13:43:57.293263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.098 [2024-10-01 13:43:57.293282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.098 [2024-10-01 13:43:57.293321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.098 [2024-10-01 13:43:57.293359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.098 [2024-10-01 13:43:57.293377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.099 [2024-10-01 13:43:57.293392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.099 [2024-10-01 13:43:57.293428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.099 [2024-10-01 13:43:57.303280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.099 [2024-10-01 13:43:57.303410] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.099 [2024-10-01 13:43:57.303444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.099 [2024-10-01 13:43:57.303462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.099 [2024-10-01 13:43:57.303500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.099 [2024-10-01 13:43:57.303553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.099 [2024-10-01 13:43:57.303575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.099 [2024-10-01 13:43:57.303590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.099 [2024-10-01 13:43:57.303627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.099 [2024-10-01 13:43:57.314628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.099 [2024-10-01 13:43:57.314764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.099 [2024-10-01 13:43:57.314797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.099 [2024-10-01 13:43:57.314816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.099 [2024-10-01 13:43:57.314878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.099 [2024-10-01 13:43:57.314917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.099 [2024-10-01 13:43:57.314935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.099 [2024-10-01 13:43:57.314949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.099 [2024-10-01 13:43:57.314985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.099 [2024-10-01 13:43:57.325439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.099 [2024-10-01 13:43:57.325580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.099 [2024-10-01 13:43:57.325622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.099 [2024-10-01 13:43:57.325640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.099 [2024-10-01 13:43:57.325678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.099 [2024-10-01 13:43:57.326614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.099 [2024-10-01 13:43:57.326654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.099 [2024-10-01 13:43:57.326671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.099 [2024-10-01 13:43:57.326912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.099 [2024-10-01 13:43:57.336280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.099 [2024-10-01 13:43:57.336403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.099 [2024-10-01 13:43:57.336436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.099 [2024-10-01 13:43:57.336455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.099 [2024-10-01 13:43:57.336493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.099 [2024-10-01 13:43:57.336530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.099 [2024-10-01 13:43:57.336564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.099 [2024-10-01 13:43:57.336580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.099 [2024-10-01 13:43:57.336618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.099 [2024-10-01 13:43:57.346404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.099 [2024-10-01 13:43:57.346527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.099 [2024-10-01 13:43:57.346576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.099 [2024-10-01 13:43:57.346595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.099 [2024-10-01 13:43:57.346634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.099 [2024-10-01 13:43:57.346670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.099 [2024-10-01 13:43:57.346688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.099 [2024-10-01 13:43:57.346726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.099 [2024-10-01 13:43:57.346766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.099 [2024-10-01 13:43:57.357939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.099 [2024-10-01 13:43:57.358074] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.099 [2024-10-01 13:43:57.358108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.099 [2024-10-01 13:43:57.358127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.099 [2024-10-01 13:43:57.358165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.099 [2024-10-01 13:43:57.358202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.099 [2024-10-01 13:43:57.358220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.099 [2024-10-01 13:43:57.358234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.099 [2024-10-01 13:43:57.358270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.099 [2024-10-01 13:43:57.368174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.099 [2024-10-01 13:43:57.368310] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.099 [2024-10-01 13:43:57.368344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.099 [2024-10-01 13:43:57.368363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.099 [2024-10-01 13:43:57.368418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.099 [2024-10-01 13:43:57.369361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.099 [2024-10-01 13:43:57.369401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.099 [2024-10-01 13:43:57.369420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.099 [2024-10-01 13:43:57.369628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.099 [2024-10-01 13:43:57.379166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.099 [2024-10-01 13:43:57.379317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.099 [2024-10-01 13:43:57.379351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.099 [2024-10-01 13:43:57.379370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.099 [2024-10-01 13:43:57.379408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.099 [2024-10-01 13:43:57.379445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.099 [2024-10-01 13:43:57.379463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.099 [2024-10-01 13:43:57.379477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.099 [2024-10-01 13:43:57.379514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.099 [2024-10-01 13:43:57.389664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.099 [2024-10-01 13:43:57.389894] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.099 [2024-10-01 13:43:57.389931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.099 [2024-10-01 13:43:57.389950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.099 [2024-10-01 13:43:57.389990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.099 [2024-10-01 13:43:57.390028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.099 [2024-10-01 13:43:57.390046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.099 [2024-10-01 13:43:57.390062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.099 [2024-10-01 13:43:57.390100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.099 [2024-10-01 13:43:57.400929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.099 [2024-10-01 13:43:57.401103] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.099 [2024-10-01 13:43:57.401140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.099 [2024-10-01 13:43:57.401159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.099 [2024-10-01 13:43:57.401215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.100 [2024-10-01 13:43:57.401257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.100 [2024-10-01 13:43:57.401275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.100 [2024-10-01 13:43:57.401291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.100 [2024-10-01 13:43:57.401328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.100 [2024-10-01 13:43:57.411095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.100 [2024-10-01 13:43:57.411222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.100 [2024-10-01 13:43:57.411255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.100 [2024-10-01 13:43:57.411274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.100 [2024-10-01 13:43:57.411312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.100 [2024-10-01 13:43:57.412259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.100 [2024-10-01 13:43:57.412300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.100 [2024-10-01 13:43:57.412319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.100 [2024-10-01 13:43:57.412525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.100 [2024-10-01 13:43:57.421924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.100 [2024-10-01 13:43:57.422050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.100 [2024-10-01 13:43:57.422083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.100 [2024-10-01 13:43:57.422101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.100 [2024-10-01 13:43:57.422154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.100 [2024-10-01 13:43:57.422223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.100 [2024-10-01 13:43:57.422244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.100 [2024-10-01 13:43:57.422258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.100 [2024-10-01 13:43:57.422294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.100 [2024-10-01 13:43:57.432203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.100 [2024-10-01 13:43:57.432332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.100 [2024-10-01 13:43:57.432366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.100 [2024-10-01 13:43:57.432384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.100 [2024-10-01 13:43:57.432423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.100 [2024-10-01 13:43:57.432460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.100 [2024-10-01 13:43:57.432477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.100 [2024-10-01 13:43:57.432492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.100 [2024-10-01 13:43:57.432527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.100 [2024-10-01 13:43:57.443364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.100 [2024-10-01 13:43:57.443496] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.100 [2024-10-01 13:43:57.443530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.100 [2024-10-01 13:43:57.443566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.100 [2024-10-01 13:43:57.443606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.100 [2024-10-01 13:43:57.443660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.100 [2024-10-01 13:43:57.443682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.100 [2024-10-01 13:43:57.443697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.100 [2024-10-01 13:43:57.443734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.100 [2024-10-01 13:43:57.453681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.100 [2024-10-01 13:43:57.453806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.100 [2024-10-01 13:43:57.453839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.100 [2024-10-01 13:43:57.453857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.100 [2024-10-01 13:43:57.453895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.100 [2024-10-01 13:43:57.454837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.100 [2024-10-01 13:43:57.454876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.100 [2024-10-01 13:43:57.454895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.100 [2024-10-01 13:43:57.455124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.100 [2024-10-01 13:43:57.464672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.100 [2024-10-01 13:43:57.464795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.100 [2024-10-01 13:43:57.464828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.100 [2024-10-01 13:43:57.464846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.100 [2024-10-01 13:43:57.464884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.100 [2024-10-01 13:43:57.464921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.100 [2024-10-01 13:43:57.464939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.100 [2024-10-01 13:43:57.464953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.100 [2024-10-01 13:43:57.464989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.100 [2024-10-01 13:43:57.475163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.100 [2024-10-01 13:43:57.475289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.100 [2024-10-01 13:43:57.475322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.100 [2024-10-01 13:43:57.475340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.100 [2024-10-01 13:43:57.475378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.100 [2024-10-01 13:43:57.475415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.100 [2024-10-01 13:43:57.475433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.100 [2024-10-01 13:43:57.475448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.100 [2024-10-01 13:43:57.475484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.100 [2024-10-01 13:43:57.486328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.100 [2024-10-01 13:43:57.486625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.100 [2024-10-01 13:43:57.486670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.100 [2024-10-01 13:43:57.486692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.100 [2024-10-01 13:43:57.486756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.100 [2024-10-01 13:43:57.486799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.100 [2024-10-01 13:43:57.486817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.100 [2024-10-01 13:43:57.486832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.100 [2024-10-01 13:43:57.486870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.100 [2024-10-01 13:43:57.496672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.100 [2024-10-01 13:43:57.496800] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.100 [2024-10-01 13:43:57.496834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.100 [2024-10-01 13:43:57.496883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.100 [2024-10-01 13:43:57.497848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.100 [2024-10-01 13:43:57.498073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.100 [2024-10-01 13:43:57.498110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.100 [2024-10-01 13:43:57.498128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.100 [2024-10-01 13:43:57.498213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.100 [2024-10-01 13:43:57.507509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.100 [2024-10-01 13:43:57.507671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.100 [2024-10-01 13:43:57.507713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.100 [2024-10-01 13:43:57.507732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.100 [2024-10-01 13:43:57.507771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.100 [2024-10-01 13:43:57.507808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.100 [2024-10-01 13:43:57.507825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.101 [2024-10-01 13:43:57.507841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.101 [2024-10-01 13:43:57.507893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.101 [2024-10-01 13:43:57.514264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.514312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.514359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.514391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.514421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.514452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.514483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.514555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.514590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.514622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.514654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514671] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.514685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.514716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.514747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.514780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.514811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.514841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.514872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.514903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.514934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.514977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.514993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.515008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.515040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.515071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.101 [2024-10-01 13:43:57.515101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:62 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.101 [2024-10-01 13:43:57.515637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62424 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:17.101 [2024-10-01 13:43:57.515652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.515668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.515683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.515699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.515713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.515730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.515745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.515770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.515786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.515803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.515817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.515834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.515849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.515865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.515897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.515914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.102 [2024-10-01 13:43:57.515930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.515946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.102 [2024-10-01 13:43:57.515961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.515977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.102 [2024-10-01 
13:43:57.515992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.102 [2024-10-01 13:43:57.516023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.102 [2024-10-01 13:43:57.516054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.102 [2024-10-01 13:43:57.516087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.102 [2024-10-01 13:43:57.516118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.102 [2024-10-01 13:43:57.516149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.102 [2024-10-01 13:43:57.516694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.102 [2024-10-01 13:43:57.516725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.102 [2024-10-01 13:43:57.516756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.102 [2024-10-01 13:43:57.516787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.102 [2024-10-01 13:43:57.516818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.102 [2024-10-01 13:43:57.516849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.102 [2024-10-01 13:43:57.516865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.516879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.516895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.516909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.516926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.516940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.516956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.516971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.516987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 
13:43:57.517303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.103 [2024-10-01 13:43:57.517617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.517648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.517680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.517710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.517741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.517771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.517801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.517832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.517880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.517912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.517943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.517975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.517991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:103 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.518006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.518023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.518037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.518053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.518068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.518084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.103 [2024-10-01 13:43:57.518099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.518114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb42020 is same with the state(6) to be set 00:16:17.103 [2024-10-01 13:43:57.518131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.103 [2024-10-01 13:43:57.518143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.103 [2024-10-01 13:43:57.518155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62224 len:8 PRP1 0x0 PRP2 0x0 00:16:17.103 [2024-10-01 13:43:57.518169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.103 [2024-10-01 13:43:57.518185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.104 [2024-10-01 13:43:57.518195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.104 [2024-10-01 13:43:57.518206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62776 len:8 PRP1 0x0 PRP2 0x0 00:16:17.104 [2024-10-01 13:43:57.518220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.104 [2024-10-01 13:43:57.518235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.104 [2024-10-01 13:43:57.518245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.104 [2024-10-01 13:43:57.518257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62784 len:8 PRP1 0x0 PRP2 0x0 00:16:17.104 [2024-10-01 13:43:57.518278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.104 [2024-10-01 13:43:57.518294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.104 [2024-10-01 13:43:57.518304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.104 [2024-10-01 13:43:57.518315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62792 len:8 PRP1 0x0 PRP2 0x0 00:16:17.104 [2024-10-01 13:43:57.518329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.104 [2024-10-01 13:43:57.518346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.104 [2024-10-01 13:43:57.518357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.104 [2024-10-01 13:43:57.518368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62800 len:8 PRP1 0x0 PRP2 0x0 00:16:17.104 [2024-10-01 13:43:57.518381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.104 [2024-10-01 13:43:57.518396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.104 [2024-10-01 13:43:57.518406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.104 [2024-10-01 13:43:57.518417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62808 len:8 PRP1 0x0 PRP2 0x0 00:16:17.104 [2024-10-01 13:43:57.518430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.104 [2024-10-01 13:43:57.518445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.104 [2024-10-01 13:43:57.518455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.104 [2024-10-01 13:43:57.518466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62816 len:8 PRP1 0x0 PRP2 0x0 00:16:17.104 [2024-10-01 13:43:57.518479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.104 [2024-10-01 13:43:57.518494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.104 [2024-10-01 13:43:57.518504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.104 [2024-10-01 13:43:57.518515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62824 len:8 PRP1 0x0 PRP2 0x0 00:16:17.104 [2024-10-01 13:43:57.518528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.104 [2024-10-01 13:43:57.518561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.104 [2024-10-01 13:43:57.518573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.104 [2024-10-01 13:43:57.518584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62832 len:8 PRP1 0x0 PRP2 0x0 00:16:17.104 [2024-10-01 13:43:57.518598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.104 [2024-10-01 13:43:57.518613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.104 [2024-10-01 13:43:57.518623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.104 [2024-10-01 13:43:57.518634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:62840 len:8 PRP1 0x0 PRP2 0x0 00:16:17.104 [2024-10-01 13:43:57.518647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.104 [2024-10-01 13:43:57.518662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.104 [2024-10-01 13:43:57.518680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.104 [2024-10-01 13:43:57.518692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62848 len:8 PRP1 0x0 PRP2 0x0 00:16:17.104 [2024-10-01 13:43:57.518706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.104 [2024-10-01 13:43:57.518721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.104 [2024-10-01 13:43:57.518732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.104 [2024-10-01 13:43:57.518742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62856 len:8 PRP1 0x0 PRP2 0x0 00:16:17.104 [2024-10-01 13:43:57.518756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.104 [2024-10-01 13:43:57.518773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.104 [2024-10-01 13:43:57.518785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.104 [2024-10-01 13:43:57.518795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62864 len:8 PRP1 0x0 PRP2 0x0 00:16:17.104 [2024-10-01 13:43:57.518809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.104 [2024-10-01 13:43:57.518857] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb42020 was disconnected and freed. reset controller. 
00:16:17.104 [2024-10-01 13:43:57.519986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.104 [2024-10-01 13:43:57.520060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.104 [2024-10-01 13:43:57.520083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.104 [2024-10-01 13:43:57.520107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.104 [2024-10-01 13:43:57.520303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.104 [2024-10-01 13:43:57.520531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.104 [2024-10-01 13:43:57.520578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.104 [2024-10-01 13:43:57.520597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.104 [2024-10-01 13:43:57.520650] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.104 [2024-10-01 13:43:57.520675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.104 [2024-10-01 13:43:57.520691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.104 [2024-10-01 13:43:57.520762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.104 [2024-10-01 13:43:57.520791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.104 [2024-10-01 13:43:57.520833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.104 [2024-10-01 13:43:57.520853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.104 [2024-10-01 13:43:57.520869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.104 [2024-10-01 13:43:57.520886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.104 [2024-10-01 13:43:57.520902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.104 [2024-10-01 13:43:57.520930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.104 [2024-10-01 13:43:57.520966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.104 [2024-10-01 13:43:57.520987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.104 [2024-10-01 13:43:57.530400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.104 [2024-10-01 13:43:57.530466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.104 [2024-10-01 13:43:57.530568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.104 [2024-10-01 13:43:57.530600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.104 [2024-10-01 13:43:57.530618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.104 [2024-10-01 13:43:57.530687] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.104 [2024-10-01 13:43:57.530716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.104 [2024-10-01 13:43:57.530733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.104 [2024-10-01 13:43:57.530753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.104 [2024-10-01 13:43:57.530787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.104 [2024-10-01 13:43:57.530808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.104 [2024-10-01 13:43:57.530823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.104 [2024-10-01 13:43:57.530837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.105 [2024-10-01 13:43:57.532099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.105 [2024-10-01 13:43:57.532130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.105 [2024-10-01 13:43:57.532153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.105 [2024-10-01 13:43:57.532167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.105 [2024-10-01 13:43:57.532396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.105 [2024-10-01 13:43:57.540488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.105 [2024-10-01 13:43:57.540639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.105 [2024-10-01 13:43:57.540687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.105 [2024-10-01 13:43:57.540709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.105 [2024-10-01 13:43:57.541575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.105 [2024-10-01 13:43:57.541820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.105 [2024-10-01 13:43:57.541874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.105 [2024-10-01 13:43:57.541896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.105 [2024-10-01 13:43:57.541910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.105 [2024-10-01 13:43:57.542949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.105 [2024-10-01 13:43:57.543041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.105 [2024-10-01 13:43:57.543072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.105 [2024-10-01 13:43:57.543091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.105 [2024-10-01 13:43:57.543732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.105 [2024-10-01 13:43:57.543845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.105 [2024-10-01 13:43:57.543891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.105 [2024-10-01 13:43:57.543912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.105 [2024-10-01 13:43:57.543949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.105 [2024-10-01 13:43:57.550968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.105 [2024-10-01 13:43:57.551094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.105 [2024-10-01 13:43:57.551128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.105 [2024-10-01 13:43:57.551147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.105 [2024-10-01 13:43:57.551181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.105 [2024-10-01 13:43:57.551213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.105 [2024-10-01 13:43:57.551231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.105 [2024-10-01 13:43:57.551245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.105 [2024-10-01 13:43:57.551277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.105 [2024-10-01 13:43:57.554022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.105 [2024-10-01 13:43:57.554145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.105 [2024-10-01 13:43:57.554179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.105 [2024-10-01 13:43:57.554198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.105 [2024-10-01 13:43:57.554231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.105 [2024-10-01 13:43:57.554264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.105 [2024-10-01 13:43:57.554282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.105 [2024-10-01 13:43:57.554296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.105 [2024-10-01 13:43:57.554329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.105 [2024-10-01 13:43:57.561428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.105 [2024-10-01 13:43:57.561576] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.105 [2024-10-01 13:43:57.561614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.105 [2024-10-01 13:43:57.561633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.105 [2024-10-01 13:43:57.562612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.105 [2024-10-01 13:43:57.562855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.105 [2024-10-01 13:43:57.562894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.105 [2024-10-01 13:43:57.562912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.105 [2024-10-01 13:43:57.562993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.105 [2024-10-01 13:43:57.565432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.105 [2024-10-01 13:43:57.565587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.105 [2024-10-01 13:43:57.565625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.105 [2024-10-01 13:43:57.565644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.105 [2024-10-01 13:43:57.565698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.105 [2024-10-01 13:43:57.565737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.105 [2024-10-01 13:43:57.565756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.105 [2024-10-01 13:43:57.565770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.105 [2024-10-01 13:43:57.565803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.105 [2024-10-01 13:43:57.572363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.105 [2024-10-01 13:43:57.572487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.105 [2024-10-01 13:43:57.572521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.105 [2024-10-01 13:43:57.572555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.105 [2024-10-01 13:43:57.572592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.105 [2024-10-01 13:43:57.572624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.105 [2024-10-01 13:43:57.572642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.105 [2024-10-01 13:43:57.572657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.106 [2024-10-01 13:43:57.572701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.106 [2024-10-01 13:43:57.575662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.106 [2024-10-01 13:43:57.575781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.106 [2024-10-01 13:43:57.575814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.106 [2024-10-01 13:43:57.575832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.106 [2024-10-01 13:43:57.575866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.106 [2024-10-01 13:43:57.576810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.106 [2024-10-01 13:43:57.576851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.106 [2024-10-01 13:43:57.576891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.106 [2024-10-01 13:43:57.577097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.106 [2024-10-01 13:43:57.582588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.106 [2024-10-01 13:43:57.582708] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.106 [2024-10-01 13:43:57.582742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.106 [2024-10-01 13:43:57.582760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.106 [2024-10-01 13:43:57.582806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.106 [2024-10-01 13:43:57.582841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.106 [2024-10-01 13:43:57.582858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.106 [2024-10-01 13:43:57.582872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.106 [2024-10-01 13:43:57.582904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.106 [2024-10-01 13:43:57.586475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.106 [2024-10-01 13:43:57.586614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.106 [2024-10-01 13:43:57.586655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.106 [2024-10-01 13:43:57.586673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.106 [2024-10-01 13:43:57.586723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.106 [2024-10-01 13:43:57.586760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.106 [2024-10-01 13:43:57.586779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.106 [2024-10-01 13:43:57.586794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.106 [2024-10-01 13:43:57.586826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.106 [2024-10-01 13:43:57.593712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.106 [2024-10-01 13:43:57.593836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.106 [2024-10-01 13:43:57.593870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.106 [2024-10-01 13:43:57.593888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.106 [2024-10-01 13:43:57.593922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.106 [2024-10-01 13:43:57.593955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.106 [2024-10-01 13:43:57.593972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.106 [2024-10-01 13:43:57.593987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.106 [2024-10-01 13:43:57.594019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.106 [2024-10-01 13:43:57.596690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.106 [2024-10-01 13:43:57.596809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.106 [2024-10-01 13:43:57.596857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.106 [2024-10-01 13:43:57.596878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.106 [2024-10-01 13:43:57.596913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.106 [2024-10-01 13:43:57.596945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.106 [2024-10-01 13:43:57.596963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.106 [2024-10-01 13:43:57.596977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.106 [2024-10-01 13:43:57.597009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.106 [2024-10-01 13:43:57.604124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.106 [2024-10-01 13:43:57.604244] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.106 [2024-10-01 13:43:57.604278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.106 [2024-10-01 13:43:57.604296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.106 [2024-10-01 13:43:57.604330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.106 [2024-10-01 13:43:57.604369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.106 [2024-10-01 13:43:57.604387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.106 [2024-10-01 13:43:57.604401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.106 [2024-10-01 13:43:57.605329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.106 [2024-10-01 13:43:57.608113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.106 [2024-10-01 13:43:57.608261] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.106 [2024-10-01 13:43:57.608294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.106 [2024-10-01 13:43:57.608313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.106 [2024-10-01 13:43:57.608348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.106 [2024-10-01 13:43:57.608381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.106 [2024-10-01 13:43:57.608398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.106 [2024-10-01 13:43:57.608413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.106 [2024-10-01 13:43:57.608445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.106 [2024-10-01 13:43:57.615115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.106 [2024-10-01 13:43:57.615238] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.106 [2024-10-01 13:43:57.615272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.106 [2024-10-01 13:43:57.615291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.106 [2024-10-01 13:43:57.615325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.106 [2024-10-01 13:43:57.615378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.106 [2024-10-01 13:43:57.615398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.106 [2024-10-01 13:43:57.615412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.106 [2024-10-01 13:43:57.615445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.106 [2024-10-01 13:43:57.618405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.106 [2024-10-01 13:43:57.618528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.106 [2024-10-01 13:43:57.618603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.106 [2024-10-01 13:43:57.618624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.106 [2024-10-01 13:43:57.618659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.106 [2024-10-01 13:43:57.618692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.106 [2024-10-01 13:43:57.618709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.106 [2024-10-01 13:43:57.618724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.106 [2024-10-01 13:43:57.619653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.106 [2024-10-01 13:43:57.625386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.106 [2024-10-01 13:43:57.625509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.106 [2024-10-01 13:43:57.625563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.106 [2024-10-01 13:43:57.625598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.106 [2024-10-01 13:43:57.625639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.106 [2024-10-01 13:43:57.625672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.106 [2024-10-01 13:43:57.625690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.106 [2024-10-01 13:43:57.625704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.106 [2024-10-01 13:43:57.625736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.107 [2024-10-01 13:43:57.629449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.107 [2024-10-01 13:43:57.629611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.107 [2024-10-01 13:43:57.629647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.107 [2024-10-01 13:43:57.629667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.107 [2024-10-01 13:43:57.629705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.107 [2024-10-01 13:43:57.629739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.107 [2024-10-01 13:43:57.629758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.107 [2024-10-01 13:43:57.629772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.107 [2024-10-01 13:43:57.629827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.107 [2024-10-01 13:43:57.636757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.107 [2024-10-01 13:43:57.636919] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.107 [2024-10-01 13:43:57.636953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.107 [2024-10-01 13:43:57.636972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.107 [2024-10-01 13:43:57.637007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.107 [2024-10-01 13:43:57.637040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.107 [2024-10-01 13:43:57.637058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.107 [2024-10-01 13:43:57.637073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.107 [2024-10-01 13:43:57.637105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.107 [2024-10-01 13:43:57.639755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.107 [2024-10-01 13:43:57.639888] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.107 [2024-10-01 13:43:57.639922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.107 [2024-10-01 13:43:57.639940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.107 [2024-10-01 13:43:57.639975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.107 [2024-10-01 13:43:57.640008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.107 [2024-10-01 13:43:57.640026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.107 [2024-10-01 13:43:57.640041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.107 [2024-10-01 13:43:57.640072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.107 [2024-10-01 13:43:57.647079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.107 [2024-10-01 13:43:57.647202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.107 [2024-10-01 13:43:57.647235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.107 [2024-10-01 13:43:57.647254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.107 [2024-10-01 13:43:57.647288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.107 [2024-10-01 13:43:57.648237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.107 [2024-10-01 13:43:57.648278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.107 [2024-10-01 13:43:57.648297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.107 [2024-10-01 13:43:57.648499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.107 [2024-10-01 13:43:57.651074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.107 [2024-10-01 13:43:57.651195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.107 [2024-10-01 13:43:57.651234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.107 [2024-10-01 13:43:57.651281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.107 [2024-10-01 13:43:57.651318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.107 [2024-10-01 13:43:57.651351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.107 [2024-10-01 13:43:57.651369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.107 [2024-10-01 13:43:57.651383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.107 [2024-10-01 13:43:57.651416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.107 [2024-10-01 13:43:57.658084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.107 [2024-10-01 13:43:57.658209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.107 [2024-10-01 13:43:57.658250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.107 [2024-10-01 13:43:57.658269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.107 [2024-10-01 13:43:57.658304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.107 [2024-10-01 13:43:57.658336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.107 [2024-10-01 13:43:57.658354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.107 [2024-10-01 13:43:57.658368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.107 [2024-10-01 13:43:57.658400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.107 [2024-10-01 13:43:57.661405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.107 [2024-10-01 13:43:57.661521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.107 [2024-10-01 13:43:57.661580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.107 [2024-10-01 13:43:57.661608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.107 [2024-10-01 13:43:57.662526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.107 [2024-10-01 13:43:57.662763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.107 [2024-10-01 13:43:57.662793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.107 [2024-10-01 13:43:57.662809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.107 [2024-10-01 13:43:57.662888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.107 8349.60 IOPS, 32.62 MiB/s [2024-10-01 13:43:57.669019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.107 [2024-10-01 13:43:57.669142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.107 [2024-10-01 13:43:57.669175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.107 [2024-10-01 13:43:57.669193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.107 [2024-10-01 13:43:57.669227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.107 [2024-10-01 13:43:57.669260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.107 [2024-10-01 13:43:57.669294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.107 [2024-10-01 13:43:57.669310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.107 [2024-10-01 13:43:57.669343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.107 [2024-10-01 13:43:57.672328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.107 [2024-10-01 13:43:57.672457] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.107 [2024-10-01 13:43:57.672490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.107 [2024-10-01 13:43:57.672508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.107 [2024-10-01 13:43:57.672555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.107 [2024-10-01 13:43:57.672601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.108 [2024-10-01 13:43:57.672631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.108 [2024-10-01 13:43:57.672647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.108 [2024-10-01 13:43:57.672681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.108 [2024-10-01 13:43:57.679703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.108 [2024-10-01 13:43:57.679823] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.108 [2024-10-01 13:43:57.679856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.108 [2024-10-01 13:43:57.679888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.108 [2024-10-01 13:43:57.679926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.108 [2024-10-01 13:43:57.679959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.108 [2024-10-01 13:43:57.679976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.108 [2024-10-01 13:43:57.679991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.108 [2024-10-01 13:43:57.680023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.108 [2024-10-01 13:43:57.682678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.108 [2024-10-01 13:43:57.682799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.108 [2024-10-01 13:43:57.682832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.108 [2024-10-01 13:43:57.682850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.108 [2024-10-01 13:43:57.682884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.108 [2024-10-01 13:43:57.682917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.108 [2024-10-01 13:43:57.682935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.108 [2024-10-01 13:43:57.682950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.108 [2024-10-01 13:43:57.682981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.108 [2024-10-01 13:43:57.689990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.108 [2024-10-01 13:43:57.690112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.108 [2024-10-01 13:43:57.690145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.108 [2024-10-01 13:43:57.690163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.108 [2024-10-01 13:43:57.690197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.108 [2024-10-01 13:43:57.691128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.108 [2024-10-01 13:43:57.691168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.108 [2024-10-01 13:43:57.691187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.108 [2024-10-01 13:43:57.691400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.108 [2024-10-01 13:43:57.694038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.108 [2024-10-01 13:43:57.694169] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.108 [2024-10-01 13:43:57.694203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.108 [2024-10-01 13:43:57.694222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.108 [2024-10-01 13:43:57.694256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.108 [2024-10-01 13:43:57.694289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.108 [2024-10-01 13:43:57.694307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.108 [2024-10-01 13:43:57.694321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.108 [2024-10-01 13:43:57.694353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.108 [2024-10-01 13:43:57.701145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.108 [2024-10-01 13:43:57.701267] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.108 [2024-10-01 13:43:57.701301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.108 [2024-10-01 13:43:57.701319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.108 [2024-10-01 13:43:57.701353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.108 [2024-10-01 13:43:57.701386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.108 [2024-10-01 13:43:57.701403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.108 [2024-10-01 13:43:57.701417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.108 [2024-10-01 13:43:57.701450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.108 [2024-10-01 13:43:57.704414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.108 [2024-10-01 13:43:57.704531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.108 [2024-10-01 13:43:57.704591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.108 [2024-10-01 13:43:57.704623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.108 [2024-10-01 13:43:57.704693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.108 [2024-10-01 13:43:57.705633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.108 [2024-10-01 13:43:57.705672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.108 [2024-10-01 13:43:57.705690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.108 [2024-10-01 13:43:57.705895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.108 [2024-10-01 13:43:57.711487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.108 [2024-10-01 13:43:57.711623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.108 [2024-10-01 13:43:57.711656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.108 [2024-10-01 13:43:57.711675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.108 [2024-10-01 13:43:57.711709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.108 [2024-10-01 13:43:57.711741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.108 [2024-10-01 13:43:57.711759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.108 [2024-10-01 13:43:57.711773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.108 [2024-10-01 13:43:57.711806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.108 [2024-10-01 13:43:57.715482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.108 [2024-10-01 13:43:57.715617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.108 [2024-10-01 13:43:57.715650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.108 [2024-10-01 13:43:57.715669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.108 [2024-10-01 13:43:57.715703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.108 [2024-10-01 13:43:57.715736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.108 [2024-10-01 13:43:57.715759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.108 [2024-10-01 13:43:57.715773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.108 [2024-10-01 13:43:57.715806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.108 [2024-10-01 13:43:57.721983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.108 [2024-10-01 13:43:57.722844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.108 [2024-10-01 13:43:57.722891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.108 [2024-10-01 13:43:57.722913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.108 [2024-10-01 13:43:57.723091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.108 [2024-10-01 13:43:57.723140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.108 [2024-10-01 13:43:57.723160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.108 [2024-10-01 13:43:57.723191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.108 [2024-10-01 13:43:57.723228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.108 [2024-10-01 13:43:57.725972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.108 [2024-10-01 13:43:57.726092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.108 [2024-10-01 13:43:57.726125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.108 [2024-10-01 13:43:57.726144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.108 [2024-10-01 13:43:57.726177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.108 [2024-10-01 13:43:57.726210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.108 [2024-10-01 13:43:57.726228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.108 [2024-10-01 13:43:57.726242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.109 [2024-10-01 13:43:57.726274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.109 [2024-10-01 13:43:57.733435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.109 [2024-10-01 13:43:57.733594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.109 [2024-10-01 13:43:57.733632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.109 [2024-10-01 13:43:57.733652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.109 [2024-10-01 13:43:57.734604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.109 [2024-10-01 13:43:57.734848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.109 [2024-10-01 13:43:57.734886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.109 [2024-10-01 13:43:57.734904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.109 [2024-10-01 13:43:57.734987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.109 [2024-10-01 13:43:57.737382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.109 [2024-10-01 13:43:57.737527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.109 [2024-10-01 13:43:57.737591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.109 [2024-10-01 13:43:57.737624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.109 [2024-10-01 13:43:57.737710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.109 [2024-10-01 13:43:57.737768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.109 [2024-10-01 13:43:57.737799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.109 [2024-10-01 13:43:57.737823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.109 [2024-10-01 13:43:57.737869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.109 [2024-10-01 13:43:57.744630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.109 [2024-10-01 13:43:57.744791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.109 [2024-10-01 13:43:57.744828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.109 [2024-10-01 13:43:57.744848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.109 [2024-10-01 13:43:57.744883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.109 [2024-10-01 13:43:57.744916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.109 [2024-10-01 13:43:57.744933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.109 [2024-10-01 13:43:57.744948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.109 [2024-10-01 13:43:57.744981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.109 [2024-10-01 13:43:57.747959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.109 [2024-10-01 13:43:57.748081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.109 [2024-10-01 13:43:57.748115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.109 [2024-10-01 13:43:57.748133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.109 [2024-10-01 13:43:57.749080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.109 [2024-10-01 13:43:57.749312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.109 [2024-10-01 13:43:57.749357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.109 [2024-10-01 13:43:57.749376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.109 [2024-10-01 13:43:57.749457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.109 [2024-10-01 13:43:57.754894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.109 [2024-10-01 13:43:57.755017] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.109 [2024-10-01 13:43:57.755050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.109 [2024-10-01 13:43:57.755068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.109 [2024-10-01 13:43:57.755102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.109 [2024-10-01 13:43:57.755135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.109 [2024-10-01 13:43:57.755152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.109 [2024-10-01 13:43:57.755169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.109 [2024-10-01 13:43:57.755201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.109 [2024-10-01 13:43:57.758949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.109 [2024-10-01 13:43:57.759072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.109 [2024-10-01 13:43:57.759106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.109 [2024-10-01 13:43:57.759124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.109 [2024-10-01 13:43:57.759158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.109 [2024-10-01 13:43:57.759209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.109 [2024-10-01 13:43:57.759228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.109 [2024-10-01 13:43:57.759243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.109 [2024-10-01 13:43:57.759276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.109 [2024-10-01 13:43:57.766171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.109 [2024-10-01 13:43:57.766309] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.109 [2024-10-01 13:43:57.766343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.109 [2024-10-01 13:43:57.766362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.109 [2024-10-01 13:43:57.766413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.109 [2024-10-01 13:43:57.766451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.109 [2024-10-01 13:43:57.766470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.109 [2024-10-01 13:43:57.766484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.109 [2024-10-01 13:43:57.766517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.109 [2024-10-01 13:43:57.769197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.109 [2024-10-01 13:43:57.769326] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.109 [2024-10-01 13:43:57.769359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.109 [2024-10-01 13:43:57.769378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.109 [2024-10-01 13:43:57.769412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.109 [2024-10-01 13:43:57.769445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.109 [2024-10-01 13:43:57.769463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.109 [2024-10-01 13:43:57.769478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.109 [2024-10-01 13:43:57.769510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.109 [2024-10-01 13:43:57.776595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.109 [2024-10-01 13:43:57.776723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.109 [2024-10-01 13:43:57.776757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.109 [2024-10-01 13:43:57.776776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.109 [2024-10-01 13:43:57.776810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.109 [2024-10-01 13:43:57.777758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.109 [2024-10-01 13:43:57.777800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.109 [2024-10-01 13:43:57.777818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.109 [2024-10-01 13:43:57.778060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.109 [2024-10-01 13:43:57.780572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.109 [2024-10-01 13:43:57.780701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.109 [2024-10-01 13:43:57.780735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.109 [2024-10-01 13:43:57.780754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.109 [2024-10-01 13:43:57.780788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.109 [2024-10-01 13:43:57.780820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.109 [2024-10-01 13:43:57.780838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.109 [2024-10-01 13:43:57.780853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.109 [2024-10-01 13:43:57.780885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.109 [2024-10-01 13:43:57.787495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.109 [2024-10-01 13:43:57.787630] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.110 [2024-10-01 13:43:57.787665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.110 [2024-10-01 13:43:57.787684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.110 [2024-10-01 13:43:57.787735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.110 [2024-10-01 13:43:57.787774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.110 [2024-10-01 13:43:57.787793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.110 [2024-10-01 13:43:57.787807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.110 [2024-10-01 13:43:57.787839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.110 [2024-10-01 13:43:57.790825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.110 [2024-10-01 13:43:57.790946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.110 [2024-10-01 13:43:57.790978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.110 [2024-10-01 13:43:57.790997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.110 [2024-10-01 13:43:57.791031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.110 [2024-10-01 13:43:57.791976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.110 [2024-10-01 13:43:57.792016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.110 [2024-10-01 13:43:57.792035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.110 [2024-10-01 13:43:57.792259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.110 [2024-10-01 13:43:57.797777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.110 [2024-10-01 13:43:57.797903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.110 [2024-10-01 13:43:57.797936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.110 [2024-10-01 13:43:57.797977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.110 [2024-10-01 13:43:57.798013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.110 [2024-10-01 13:43:57.798046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.110 [2024-10-01 13:43:57.798064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.110 [2024-10-01 13:43:57.798079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.110 [2024-10-01 13:43:57.798111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.110 [2024-10-01 13:43:57.801782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.110 [2024-10-01 13:43:57.801904] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.110 [2024-10-01 13:43:57.801937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.110 [2024-10-01 13:43:57.801955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.110 [2024-10-01 13:43:57.801990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.110 [2024-10-01 13:43:57.802022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.110 [2024-10-01 13:43:57.802041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.110 [2024-10-01 13:43:57.802055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.110 [2024-10-01 13:43:57.802087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.110 [2024-10-01 13:43:57.809648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.110 [2024-10-01 13:43:57.809947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.110 [2024-10-01 13:43:57.809993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.110 [2024-10-01 13:43:57.810014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.110 [2024-10-01 13:43:57.810058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.110 [2024-10-01 13:43:57.810093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.110 [2024-10-01 13:43:57.810111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.110 [2024-10-01 13:43:57.810126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.110 [2024-10-01 13:43:57.810159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.110 [2024-10-01 13:43:57.812882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.110 [2024-10-01 13:43:57.813002] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.110 [2024-10-01 13:43:57.813044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.110 [2024-10-01 13:43:57.813065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.110 [2024-10-01 13:43:57.813099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.110 [2024-10-01 13:43:57.813132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.110 [2024-10-01 13:43:57.813169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.110 [2024-10-01 13:43:57.813185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.110 [2024-10-01 13:43:57.813219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.110 [2024-10-01 13:43:57.820885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.110 [2024-10-01 13:43:57.821110] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.110 [2024-10-01 13:43:57.821150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.110 [2024-10-01 13:43:57.821169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.110 [2024-10-01 13:43:57.822148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.110 [2024-10-01 13:43:57.822426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.110 [2024-10-01 13:43:57.822466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.110 [2024-10-01 13:43:57.822487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.110 [2024-10-01 13:43:57.822590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.110 [2024-10-01 13:43:57.824983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.110 [2024-10-01 13:43:57.825152] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.110 [2024-10-01 13:43:57.825188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.110 [2024-10-01 13:43:57.825219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.110 [2024-10-01 13:43:57.825256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.110 [2024-10-01 13:43:57.825305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.110 [2024-10-01 13:43:57.825325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.110 [2024-10-01 13:43:57.825341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.110 [2024-10-01 13:43:57.825375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.110 [2024-10-01 13:43:57.832249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.110 [2024-10-01 13:43:57.832424] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.110 [2024-10-01 13:43:57.832461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.110 [2024-10-01 13:43:57.832481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.110 [2024-10-01 13:43:57.832517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.110 [2024-10-01 13:43:57.832567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.110 [2024-10-01 13:43:57.832587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.110 [2024-10-01 13:43:57.832603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.110 [2024-10-01 13:43:57.832637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.110 [2024-10-01 13:43:57.835530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.110 [2024-10-01 13:43:57.835683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.110 [2024-10-01 13:43:57.835717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.110 [2024-10-01 13:43:57.835735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.110 [2024-10-01 13:43:57.835769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.110 [2024-10-01 13:43:57.836716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.110 [2024-10-01 13:43:57.836757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.110 [2024-10-01 13:43:57.836775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.110 [2024-10-01 13:43:57.836993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.110 [2024-10-01 13:43:57.842515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.110 [2024-10-01 13:43:57.842657] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.110 [2024-10-01 13:43:57.842701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.110 [2024-10-01 13:43:57.842723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.111 [2024-10-01 13:43:57.842757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.111 [2024-10-01 13:43:57.842790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.111 [2024-10-01 13:43:57.842808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.111 [2024-10-01 13:43:57.842822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.111 [2024-10-01 13:43:57.842855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.111 [2024-10-01 13:43:57.846589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.111 [2024-10-01 13:43:57.846710] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.111 [2024-10-01 13:43:57.846743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.111 [2024-10-01 13:43:57.846761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.111 [2024-10-01 13:43:57.846794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.111 [2024-10-01 13:43:57.846837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.111 [2024-10-01 13:43:57.846855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.111 [2024-10-01 13:43:57.846870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.111 [2024-10-01 13:43:57.846902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.111 [2024-10-01 13:43:57.853025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.111 [2024-10-01 13:43:57.853896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.111 [2024-10-01 13:43:57.853943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.111 [2024-10-01 13:43:57.853965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.111 [2024-10-01 13:43:57.854173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.111 [2024-10-01 13:43:57.854234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.111 [2024-10-01 13:43:57.854256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.111 [2024-10-01 13:43:57.854271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.111 [2024-10-01 13:43:57.854306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.111 [2024-10-01 13:43:57.856942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.111 [2024-10-01 13:43:57.857062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.111 [2024-10-01 13:43:57.857095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.111 [2024-10-01 13:43:57.857113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.111 [2024-10-01 13:43:57.857147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.111 [2024-10-01 13:43:57.857180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.111 [2024-10-01 13:43:57.857197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.111 [2024-10-01 13:43:57.857212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.111 [2024-10-01 13:43:57.857243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.111 [2024-10-01 13:43:57.864233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.111 [2024-10-01 13:43:57.864364] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.111 [2024-10-01 13:43:57.864397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.111 [2024-10-01 13:43:57.864415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.111 [2024-10-01 13:43:57.864465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.111 [2024-10-01 13:43:57.864502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.111 [2024-10-01 13:43:57.864520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.111 [2024-10-01 13:43:57.864549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.111 [2024-10-01 13:43:57.865467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.111 [2024-10-01 13:43:57.868119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.111 [2024-10-01 13:43:57.868382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.111 [2024-10-01 13:43:57.868427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.111 [2024-10-01 13:43:57.868448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.111 [2024-10-01 13:43:57.868508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.111 [2024-10-01 13:43:57.868562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.111 [2024-10-01 13:43:57.868584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.111 [2024-10-01 13:43:57.868621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.111 [2024-10-01 13:43:57.868657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.111 [2024-10-01 13:43:57.875278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.111 [2024-10-01 13:43:57.875402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.111 [2024-10-01 13:43:57.875436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.111 [2024-10-01 13:43:57.875455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.111 [2024-10-01 13:43:57.875489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.111 [2024-10-01 13:43:57.875521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.111 [2024-10-01 13:43:57.875554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.111 [2024-10-01 13:43:57.875572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.111 [2024-10-01 13:43:57.875606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.111 [2024-10-01 13:43:57.878599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.111 [2024-10-01 13:43:57.878720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.111 [2024-10-01 13:43:57.878753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.111 [2024-10-01 13:43:57.878771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.111 [2024-10-01 13:43:57.878804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.111 [2024-10-01 13:43:57.879738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.111 [2024-10-01 13:43:57.879777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.111 [2024-10-01 13:43:57.879795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.111 [2024-10-01 13:43:57.880011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.111 [2024-10-01 13:43:57.885514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.111 [2024-10-01 13:43:57.885665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.111 [2024-10-01 13:43:57.885700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.111 [2024-10-01 13:43:57.885719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.111 [2024-10-01 13:43:57.885752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.111 [2024-10-01 13:43:57.885785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.111 [2024-10-01 13:43:57.885802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.111 [2024-10-01 13:43:57.885817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.111 [2024-10-01 13:43:57.885850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.111 [2024-10-01 13:43:57.889626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.111 [2024-10-01 13:43:57.889771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.111 [2024-10-01 13:43:57.889805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.111 [2024-10-01 13:43:57.889824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.111 [2024-10-01 13:43:57.889858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.111 [2024-10-01 13:43:57.889891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.111 [2024-10-01 13:43:57.889909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.111 [2024-10-01 13:43:57.889924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.111 [2024-10-01 13:43:57.889956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.111 [2024-10-01 13:43:57.896913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.111 [2024-10-01 13:43:57.897045] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.111 [2024-10-01 13:43:57.897079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.111 [2024-10-01 13:43:57.897097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.111 [2024-10-01 13:43:57.897131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.111 [2024-10-01 13:43:57.897164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.111 [2024-10-01 13:43:57.897182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.112 [2024-10-01 13:43:57.897196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.112 [2024-10-01 13:43:57.897228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.112 [2024-10-01 13:43:57.899938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.112 [2024-10-01 13:43:57.900062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.112 [2024-10-01 13:43:57.900095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.112 [2024-10-01 13:43:57.900114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.112 [2024-10-01 13:43:57.900147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.112 [2024-10-01 13:43:57.900181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.112 [2024-10-01 13:43:57.900199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.112 [2024-10-01 13:43:57.900214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.112 [2024-10-01 13:43:57.900246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.112 [2024-10-01 13:43:57.907394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.112 [2024-10-01 13:43:57.907531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.112 [2024-10-01 13:43:57.907603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.112 [2024-10-01 13:43:57.907624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.112 [2024-10-01 13:43:57.907661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.112 [2024-10-01 13:43:57.908646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.112 [2024-10-01 13:43:57.908690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.112 [2024-10-01 13:43:57.908709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.112 [2024-10-01 13:43:57.908943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.112 [2024-10-01 13:43:57.911420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.112 [2024-10-01 13:43:57.911586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.112 [2024-10-01 13:43:57.911630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.112 [2024-10-01 13:43:57.911651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.112 [2024-10-01 13:43:57.911700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.112 [2024-10-01 13:43:57.911734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.112 [2024-10-01 13:43:57.911752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.112 [2024-10-01 13:43:57.911766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.112 [2024-10-01 13:43:57.911799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.112 [2024-10-01 13:43:57.918462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.112 [2024-10-01 13:43:57.918631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.112 [2024-10-01 13:43:57.918668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.112 [2024-10-01 13:43:57.918687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.112 [2024-10-01 13:43:57.918723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.112 [2024-10-01 13:43:57.918757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.112 [2024-10-01 13:43:57.918775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.112 [2024-10-01 13:43:57.918790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.112 [2024-10-01 13:43:57.918823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.112 [2024-10-01 13:43:57.921767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.112 [2024-10-01 13:43:57.921901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.112 [2024-10-01 13:43:57.921934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.112 [2024-10-01 13:43:57.921953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.112 [2024-10-01 13:43:57.921987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.112 [2024-10-01 13:43:57.922958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.112 [2024-10-01 13:43:57.923001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.112 [2024-10-01 13:43:57.923020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.112 [2024-10-01 13:43:57.923256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.112 [2024-10-01 13:43:57.928860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.112 [2024-10-01 13:43:57.929037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.112 [2024-10-01 13:43:57.929077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.112 [2024-10-01 13:43:57.929096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.112 [2024-10-01 13:43:57.929133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.112 [2024-10-01 13:43:57.929167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.112 [2024-10-01 13:43:57.929185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.112 [2024-10-01 13:43:57.929200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.112 [2024-10-01 13:43:57.929233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.112 [2024-10-01 13:43:57.932918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.112 [2024-10-01 13:43:57.933062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.112 [2024-10-01 13:43:57.933096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.112 [2024-10-01 13:43:57.933115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.112 [2024-10-01 13:43:57.933149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.112 [2024-10-01 13:43:57.933182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.112 [2024-10-01 13:43:57.933201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.112 [2024-10-01 13:43:57.933216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.112 [2024-10-01 13:43:57.933248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.112 [2024-10-01 13:43:57.940324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.112 [2024-10-01 13:43:57.940499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.112 [2024-10-01 13:43:57.940552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.112 [2024-10-01 13:43:57.940576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.112 [2024-10-01 13:43:57.940614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.112 [2024-10-01 13:43:57.940647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.112 [2024-10-01 13:43:57.940666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.112 [2024-10-01 13:43:57.940681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.112 [2024-10-01 13:43:57.940714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.112 [2024-10-01 13:43:57.943326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.112 [2024-10-01 13:43:57.943455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.112 [2024-10-01 13:43:57.943489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.112 [2024-10-01 13:43:57.943551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.112 [2024-10-01 13:43:57.943593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.113 [2024-10-01 13:43:57.943626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.113 [2024-10-01 13:43:57.943645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.113 [2024-10-01 13:43:57.943660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.113 [2024-10-01 13:43:57.943692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.113 [2024-10-01 13:43:57.950650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.113 [2024-10-01 13:43:57.950781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.113 [2024-10-01 13:43:57.950821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.113 [2024-10-01 13:43:57.950848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.113 [2024-10-01 13:43:57.951822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.113 [2024-10-01 13:43:57.952089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.113 [2024-10-01 13:43:57.952130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.113 [2024-10-01 13:43:57.952149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.113 [2024-10-01 13:43:57.952232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.113 [2024-10-01 13:43:57.953861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.113 [2024-10-01 13:43:57.953999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.113 [2024-10-01 13:43:57.954034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.113 [2024-10-01 13:43:57.954053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.113 [2024-10-01 13:43:57.954091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.113 [2024-10-01 13:43:57.954139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.113 [2024-10-01 13:43:57.954170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.113 [2024-10-01 13:43:57.954187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.113 [2024-10-01 13:43:57.954993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.113 [2024-10-01 13:43:57.961227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.113 [2024-10-01 13:43:57.961384] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.113 [2024-10-01 13:43:57.961419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.113 [2024-10-01 13:43:57.961439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.113 [2024-10-01 13:43:57.961473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.113 [2024-10-01 13:43:57.961506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.113 [2024-10-01 13:43:57.961571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.113 [2024-10-01 13:43:57.961589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.113 [2024-10-01 13:43:57.962515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.113 [2024-10-01 13:43:57.964194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.113 [2024-10-01 13:43:57.964322] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.113 [2024-10-01 13:43:57.964355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.113 [2024-10-01 13:43:57.964374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.113 [2024-10-01 13:43:57.964408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.113 [2024-10-01 13:43:57.964442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.113 [2024-10-01 13:43:57.964460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.113 [2024-10-01 13:43:57.964474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.113 [2024-10-01 13:43:57.964506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.113 [2024-10-01 13:43:57.971335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.113 [2024-10-01 13:43:57.972669] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.113 [2024-10-01 13:43:57.972716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.113 [2024-10-01 13:43:57.972737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.113 [2024-10-01 13:43:57.973613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.113 [2024-10-01 13:43:57.973762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.113 [2024-10-01 13:43:57.973789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.113 [2024-10-01 13:43:57.973805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.113 [2024-10-01 13:43:57.973839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.113 [2024-10-01 13:43:57.974290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.113 [2024-10-01 13:43:57.974397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.113 [2024-10-01 13:43:57.974429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.113 [2024-10-01 13:43:57.974447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.113 [2024-10-01 13:43:57.974480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.113 [2024-10-01 13:43:57.974512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.113 [2024-10-01 13:43:57.974530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.113 [2024-10-01 13:43:57.974575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.113 [2024-10-01 13:43:57.974621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.113 [2024-10-01 13:43:57.982252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.113 [2024-10-01 13:43:57.982378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.113 [2024-10-01 13:43:57.982414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.113 [2024-10-01 13:43:57.982432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.113 [2024-10-01 13:43:57.982466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.113 [2024-10-01 13:43:57.982499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.113 [2024-10-01 13:43:57.982516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.113 [2024-10-01 13:43:57.982531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.113 [2024-10-01 13:43:57.983800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.113 [2024-10-01 13:43:57.984977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.113 [2024-10-01 13:43:57.985094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.113 [2024-10-01 13:43:57.985127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.113 [2024-10-01 13:43:57.985145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.113 [2024-10-01 13:43:57.985179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.113 [2024-10-01 13:43:57.985214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.113 [2024-10-01 13:43:57.985233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.113 [2024-10-01 13:43:57.985247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.113 [2024-10-01 13:43:57.985279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.113 [2024-10-01 13:43:57.992796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.113 [2024-10-01 13:43:57.992935] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.113 [2024-10-01 13:43:57.992970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.113 [2024-10-01 13:43:57.992989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.113 [2024-10-01 13:43:57.993023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.113 [2024-10-01 13:43:57.993076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.113 [2024-10-01 13:43:57.993099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.113 [2024-10-01 13:43:57.993114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.113 [2024-10-01 13:43:57.993148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.113 [2024-10-01 13:43:57.995071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.113 [2024-10-01 13:43:57.995191] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.113 [2024-10-01 13:43:57.995225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.113 [2024-10-01 13:43:57.995243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.113 [2024-10-01 13:43:57.996214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.113 [2024-10-01 13:43:57.996433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.113 [2024-10-01 13:43:57.996461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.113 [2024-10-01 13:43:57.996477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.113 [2024-10-01 13:43:57.996571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.114 [2024-10-01 13:43:58.002903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.114 [2024-10-01 13:43:58.003049] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.114 [2024-10-01 13:43:58.003084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.114 [2024-10-01 13:43:58.003102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.114 [2024-10-01 13:43:58.003136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.114 [2024-10-01 13:43:58.003169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.114 [2024-10-01 13:43:58.003186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.114 [2024-10-01 13:43:58.003201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.114 [2024-10-01 13:43:58.003232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.114 [2024-10-01 13:43:58.006684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.114 [2024-10-01 13:43:58.006807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.114 [2024-10-01 13:43:58.006841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.114 [2024-10-01 13:43:58.006859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.114 [2024-10-01 13:43:58.006893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.114 [2024-10-01 13:43:58.006925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.114 [2024-10-01 13:43:58.006943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.114 [2024-10-01 13:43:58.006957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.114 [2024-10-01 13:43:58.006990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.114 [2024-10-01 13:43:58.014955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.114 [2024-10-01 13:43:58.015088] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.114 [2024-10-01 13:43:58.015122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.114 [2024-10-01 13:43:58.015140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.114 [2024-10-01 13:43:58.015175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.114 [2024-10-01 13:43:58.015208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.114 [2024-10-01 13:43:58.015226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.114 [2024-10-01 13:43:58.015261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.114 [2024-10-01 13:43:58.015296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.114 [2024-10-01 13:43:58.017983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.114 [2024-10-01 13:43:58.018107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.114 [2024-10-01 13:43:58.018140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.114 [2024-10-01 13:43:58.018159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.114 [2024-10-01 13:43:58.018193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.114 [2024-10-01 13:43:58.018225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.114 [2024-10-01 13:43:58.018243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.114 [2024-10-01 13:43:58.018258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.114 [2024-10-01 13:43:58.018290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.114 [2024-10-01 13:43:58.025788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.114 [2024-10-01 13:43:58.025913] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.114 [2024-10-01 13:43:58.025947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.114 [2024-10-01 13:43:58.025966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.114 [2024-10-01 13:43:58.025999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.114 [2024-10-01 13:43:58.026032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.114 [2024-10-01 13:43:58.026050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.114 [2024-10-01 13:43:58.026065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.114 [2024-10-01 13:43:58.027012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.114 [2024-10-01 13:43:58.029635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.114 [2024-10-01 13:43:58.029911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.114 [2024-10-01 13:43:58.029956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.114 [2024-10-01 13:43:58.029977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.114 [2024-10-01 13:43:58.030021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.114 [2024-10-01 13:43:58.030057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.114 [2024-10-01 13:43:58.030076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.114 [2024-10-01 13:43:58.030091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.114 [2024-10-01 13:43:58.030123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.114 [2024-10-01 13:43:58.036882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.114 [2024-10-01 13:43:58.037028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.114 [2024-10-01 13:43:58.037062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.114 [2024-10-01 13:43:58.037081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.114 [2024-10-01 13:43:58.037115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.114 [2024-10-01 13:43:58.037147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.114 [2024-10-01 13:43:58.037166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.114 [2024-10-01 13:43:58.037180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.114 [2024-10-01 13:43:58.037212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.114 [2024-10-01 13:43:58.040237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.114 [2024-10-01 13:43:58.040357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.114 [2024-10-01 13:43:58.040389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.114 [2024-10-01 13:43:58.040408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.114 [2024-10-01 13:43:58.040441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.114 [2024-10-01 13:43:58.041395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.114 [2024-10-01 13:43:58.041437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.114 [2024-10-01 13:43:58.041456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.114 [2024-10-01 13:43:58.041676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.114 [2024-10-01 13:43:58.047275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.114 [2024-10-01 13:43:58.047400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.114 [2024-10-01 13:43:58.047433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.114 [2024-10-01 13:43:58.047451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.114 [2024-10-01 13:43:58.047485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.114 [2024-10-01 13:43:58.047518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.114 [2024-10-01 13:43:58.047552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.114 [2024-10-01 13:43:58.047570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.114 [2024-10-01 13:43:58.047604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.114 [2024-10-01 13:43:58.051286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.114 [2024-10-01 13:43:58.051408] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.114 [2024-10-01 13:43:58.051441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.114 [2024-10-01 13:43:58.051460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.114 [2024-10-01 13:43:58.051494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.114 [2024-10-01 13:43:58.051573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.114 [2024-10-01 13:43:58.051595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.114 [2024-10-01 13:43:58.051610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.114 [2024-10-01 13:43:58.051642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.114 [2024-10-01 13:43:58.058514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.114 [2024-10-01 13:43:58.058678] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.114 [2024-10-01 13:43:58.058712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.115 [2024-10-01 13:43:58.058731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.115 [2024-10-01 13:43:58.058765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.115 [2024-10-01 13:43:58.058798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.115 [2024-10-01 13:43:58.058816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.115 [2024-10-01 13:43:58.058830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.115 [2024-10-01 13:43:58.058863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.115 [2024-10-01 13:43:58.061516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.115 [2024-10-01 13:43:58.061648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.115 [2024-10-01 13:43:58.061680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.115 [2024-10-01 13:43:58.061699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.115 [2024-10-01 13:43:58.061732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.115 [2024-10-01 13:43:58.061765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.115 [2024-10-01 13:43:58.061783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.115 [2024-10-01 13:43:58.061797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.115 [2024-10-01 13:43:58.061829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.115 [2024-10-01 13:43:58.068724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.115 [2024-10-01 13:43:58.068847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.115 [2024-10-01 13:43:58.068880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.115 [2024-10-01 13:43:58.068899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.115 [2024-10-01 13:43:58.068934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.115 [2024-10-01 13:43:58.069862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.115 [2024-10-01 13:43:58.069902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.115 [2024-10-01 13:43:58.069920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.115 [2024-10-01 13:43:58.070177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.115 [2024-10-01 13:43:58.072702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.115 [2024-10-01 13:43:58.072826] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.115 [2024-10-01 13:43:58.072860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.115 [2024-10-01 13:43:58.072879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.115 [2024-10-01 13:43:58.072912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.115 [2024-10-01 13:43:58.072945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.115 [2024-10-01 13:43:58.072963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.115 [2024-10-01 13:43:58.072977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.115 [2024-10-01 13:43:58.073009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.115 [2024-10-01 13:43:58.079773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.115 [2024-10-01 13:43:58.079968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.115 [2024-10-01 13:43:58.080004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.115 [2024-10-01 13:43:58.080024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.115 [2024-10-01 13:43:58.080060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.115 [2024-10-01 13:43:58.080104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.115 [2024-10-01 13:43:58.080121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.115 [2024-10-01 13:43:58.080137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.115 [2024-10-01 13:43:58.080170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.115 [2024-10-01 13:43:58.083096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.115 [2024-10-01 13:43:58.083215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.115 [2024-10-01 13:43:58.083248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.115 [2024-10-01 13:43:58.083266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.115 [2024-10-01 13:43:58.084227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.115 [2024-10-01 13:43:58.084457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.115 [2024-10-01 13:43:58.084503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.115 [2024-10-01 13:43:58.084521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.115 [2024-10-01 13:43:58.084617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.115 [2024-10-01 13:43:58.090008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.115 [2024-10-01 13:43:58.090129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.115 [2024-10-01 13:43:58.090162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.115 [2024-10-01 13:43:58.090210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.115 [2024-10-01 13:43:58.090247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.115 [2024-10-01 13:43:58.090280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.115 [2024-10-01 13:43:58.090299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.115 [2024-10-01 13:43:58.090313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.115 [2024-10-01 13:43:58.090345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.115 [2024-10-01 13:43:58.094043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.115 [2024-10-01 13:43:58.094165] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.115 [2024-10-01 13:43:58.094209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.115 [2024-10-01 13:43:58.094228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.115 [2024-10-01 13:43:58.094262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.115 [2024-10-01 13:43:58.094295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.115 [2024-10-01 13:43:58.094313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.115 [2024-10-01 13:43:58.094327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.115 [2024-10-01 13:43:58.094359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.115 [2024-10-01 13:43:58.101288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.115 [2024-10-01 13:43:58.101420] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.115 [2024-10-01 13:43:58.101461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.115 [2024-10-01 13:43:58.101480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.115 [2024-10-01 13:43:58.101514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.115 [2024-10-01 13:43:58.101562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.115 [2024-10-01 13:43:58.101584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.115 [2024-10-01 13:43:58.101598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.115 [2024-10-01 13:43:58.101630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.115 [2024-10-01 13:43:58.104302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.115 [2024-10-01 13:43:58.104420] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.115 [2024-10-01 13:43:58.104452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.115 [2024-10-01 13:43:58.104470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.115 [2024-10-01 13:43:58.104503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.115 [2024-10-01 13:43:58.104551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.115 [2024-10-01 13:43:58.104591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.115 [2024-10-01 13:43:58.104607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.115 [2024-10-01 13:43:58.104641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.115 [2024-10-01 13:43:58.111689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.115 [2024-10-01 13:43:58.111821] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.115 [2024-10-01 13:43:58.111854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.115 [2024-10-01 13:43:58.111885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.115 [2024-10-01 13:43:58.111924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.116 [2024-10-01 13:43:58.112874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.116 [2024-10-01 13:43:58.112914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.116 [2024-10-01 13:43:58.112933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.116 [2024-10-01 13:43:58.113155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.116 [2024-10-01 13:43:58.115704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.116 [2024-10-01 13:43:58.115825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.116 [2024-10-01 13:43:58.115858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.116 [2024-10-01 13:43:58.115891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.116 [2024-10-01 13:43:58.115929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.116 [2024-10-01 13:43:58.115962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.116 [2024-10-01 13:43:58.115980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.116 [2024-10-01 13:43:58.116003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.116 [2024-10-01 13:43:58.116035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.116 [2024-10-01 13:43:58.122711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.116 [2024-10-01 13:43:58.122843] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.116 [2024-10-01 13:43:58.122878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.116 [2024-10-01 13:43:58.122897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.116 [2024-10-01 13:43:58.122931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.116 [2024-10-01 13:43:58.122963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.116 [2024-10-01 13:43:58.122981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.116 [2024-10-01 13:43:58.122995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.116 [2024-10-01 13:43:58.123027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.116 [2024-10-01 13:43:58.126108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.116 [2024-10-01 13:43:58.126256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.116 [2024-10-01 13:43:58.126290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.116 [2024-10-01 13:43:58.126309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.116 [2024-10-01 13:43:58.127265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.116 [2024-10-01 13:43:58.127499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.116 [2024-10-01 13:43:58.127550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.116 [2024-10-01 13:43:58.127572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.116 [2024-10-01 13:43:58.127655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.116 [2024-10-01 13:43:58.133024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.116 [2024-10-01 13:43:58.133147] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.116 [2024-10-01 13:43:58.133181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.116 [2024-10-01 13:43:58.133199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.116 [2024-10-01 13:43:58.133233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.116 [2024-10-01 13:43:58.133266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.116 [2024-10-01 13:43:58.133284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.116 [2024-10-01 13:43:58.133298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.116 [2024-10-01 13:43:58.133331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.116 [2024-10-01 13:43:58.137035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.116 [2024-10-01 13:43:58.137157] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.116 [2024-10-01 13:43:58.137190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.116 [2024-10-01 13:43:58.137208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.116 [2024-10-01 13:43:58.137242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.116 [2024-10-01 13:43:58.137275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.116 [2024-10-01 13:43:58.137293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.116 [2024-10-01 13:43:58.137308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.116 [2024-10-01 13:43:58.137339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.116 [2024-10-01 13:43:58.144260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.116 [2024-10-01 13:43:58.144399] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.116 [2024-10-01 13:43:58.144433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.116 [2024-10-01 13:43:58.144452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.116 [2024-10-01 13:43:58.144511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.116 [2024-10-01 13:43:58.144561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.116 [2024-10-01 13:43:58.144582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.116 [2024-10-01 13:43:58.144596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.116 [2024-10-01 13:43:58.144629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.116 [2024-10-01 13:43:58.147264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.116 [2024-10-01 13:43:58.147395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.116 [2024-10-01 13:43:58.147428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.116 [2024-10-01 13:43:58.147447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.116 [2024-10-01 13:43:58.147480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.116 [2024-10-01 13:43:58.147513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.116 [2024-10-01 13:43:58.147531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.116 [2024-10-01 13:43:58.147566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.116 [2024-10-01 13:43:58.147599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.116 [2024-10-01 13:43:58.155513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.116 [2024-10-01 13:43:58.155671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.116 [2024-10-01 13:43:58.155705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.116 [2024-10-01 13:43:58.155724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.116 [2024-10-01 13:43:58.155759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.116 [2024-10-01 13:43:58.156721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.116 [2024-10-01 13:43:58.156761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.116 [2024-10-01 13:43:58.156781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.116 [2024-10-01 13:43:58.156988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.116 [2024-10-01 13:43:58.157361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.116 [2024-10-01 13:43:58.157472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.116 [2024-10-01 13:43:58.157504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.116 [2024-10-01 13:43:58.157523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.116 [2024-10-01 13:43:58.158804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.116 [2024-10-01 13:43:58.159757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.117 [2024-10-01 13:43:58.159798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.117 [2024-10-01 13:43:58.159838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.117 [2024-10-01 13:43:58.160065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.117 [2024-10-01 13:43:58.166983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.117 [2024-10-01 13:43:58.167712] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.117 [2024-10-01 13:43:58.167759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.117 [2024-10-01 13:43:58.167782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.117 [2024-10-01 13:43:58.167912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.117 [2024-10-01 13:43:58.167987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.117 [2024-10-01 13:43:58.168024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.117 [2024-10-01 13:43:58.168042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.117 [2024-10-01 13:43:58.168058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.117 [2024-10-01 13:43:58.168090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.117 [2024-10-01 13:43:58.168155] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.117 [2024-10-01 13:43:58.168183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.117 [2024-10-01 13:43:58.168201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.117 [2024-10-01 13:43:58.168235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.117 [2024-10-01 13:43:58.168499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.117 [2024-10-01 13:43:58.168527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.117 [2024-10-01 13:43:58.168560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.117 [2024-10-01 13:43:58.168708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.117 [2024-10-01 13:43:58.178945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.117 [2024-10-01 13:43:58.179066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.117 [2024-10-01 13:43:58.179182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.117 [2024-10-01 13:43:58.179216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.117 [2024-10-01 13:43:58.179235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.117 [2024-10-01 13:43:58.179314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.117 [2024-10-01 13:43:58.179342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.117 [2024-10-01 13:43:58.179359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.117 [2024-10-01 13:43:58.179381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.117 [2024-10-01 13:43:58.179414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.117 [2024-10-01 13:43:58.179467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.117 [2024-10-01 13:43:58.179483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.117 [2024-10-01 13:43:58.179499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.117 [2024-10-01 13:43:58.179550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.117 [2024-10-01 13:43:58.179575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.117 [2024-10-01 13:43:58.179590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.117 [2024-10-01 13:43:58.179604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.117 [2024-10-01 13:43:58.179636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.117 [2024-10-01 13:43:58.190677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.117 [2024-10-01 13:43:58.190775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.117 [2024-10-01 13:43:58.190945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.117 [2024-10-01 13:43:58.191018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.117 [2024-10-01 13:43:58.191054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.117 [2024-10-01 13:43:58.191151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.117 [2024-10-01 13:43:58.191198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.117 [2024-10-01 13:43:58.191234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.117 [2024-10-01 13:43:58.191292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.117 [2024-10-01 13:43:58.191322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.117 [2024-10-01 13:43:58.191351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.117 [2024-10-01 13:43:58.191369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.117 [2024-10-01 13:43:58.191384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.117 [2024-10-01 13:43:58.191402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.117 [2024-10-01 13:43:58.191418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.117 [2024-10-01 13:43:58.191432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.117 [2024-10-01 13:43:58.192694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.117 [2024-10-01 13:43:58.192736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.117 [2024-10-01 13:43:58.201700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.117 [2024-10-01 13:43:58.201771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.117 [2024-10-01 13:43:58.201879] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.117 [2024-10-01 13:43:58.201913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.117 [2024-10-01 13:43:58.201932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.117 [2024-10-01 13:43:58.202014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.117 [2024-10-01 13:43:58.202042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.117 [2024-10-01 13:43:58.202060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.117 [2024-10-01 13:43:58.203000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.117 [2024-10-01 13:43:58.203063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.117 [2024-10-01 13:43:58.203296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.117 [2024-10-01 13:43:58.203335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.117 [2024-10-01 13:43:58.203354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.117 [2024-10-01 13:43:58.203373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.117 [2024-10-01 13:43:58.203388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.117 [2024-10-01 13:43:58.203402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.117 [2024-10-01 13:43:58.203517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.117 [2024-10-01 13:43:58.203564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.117 [2024-10-01 13:43:58.213240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.117 [2024-10-01 13:43:58.213302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.117 [2024-10-01 13:43:58.213665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.117 [2024-10-01 13:43:58.213712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.117 [2024-10-01 13:43:58.213734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.117 [2024-10-01 13:43:58.213788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.117 [2024-10-01 13:43:58.213814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.117 [2024-10-01 13:43:58.213831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.117 [2024-10-01 13:43:58.213977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.117 [2024-10-01 13:43:58.214015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.117 [2024-10-01 13:43:58.214156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.117 [2024-10-01 13:43:58.214183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.117 [2024-10-01 13:43:58.214200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.117 [2024-10-01 13:43:58.214219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.117 [2024-10-01 13:43:58.214234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.117 [2024-10-01 13:43:58.214248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.117 [2024-10-01 13:43:58.214289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.117 [2024-10-01 13:43:58.214331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.118 [2024-10-01 13:43:58.224063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.118 [2024-10-01 13:43:58.224126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.118 [2024-10-01 13:43:58.224247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.118 [2024-10-01 13:43:58.224281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.118 [2024-10-01 13:43:58.224299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.118 [2024-10-01 13:43:58.224350] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.118 [2024-10-01 13:43:58.224376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.118 [2024-10-01 13:43:58.224392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.118 [2024-10-01 13:43:58.224426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.118 [2024-10-01 13:43:58.224450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.118 [2024-10-01 13:43:58.224476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.118 [2024-10-01 13:43:58.224494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.118 [2024-10-01 13:43:58.224509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.118 [2024-10-01 13:43:58.224527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.118 [2024-10-01 13:43:58.224560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.118 [2024-10-01 13:43:58.224576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.118 [2024-10-01 13:43:58.224609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.118 [2024-10-01 13:43:58.224629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.118 [2024-10-01 13:43:58.236134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.118 [2024-10-01 13:43:58.236203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.118 [2024-10-01 13:43:58.236326] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.118 [2024-10-01 13:43:58.236361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.118 [2024-10-01 13:43:58.236380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.118 [2024-10-01 13:43:58.236432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.118 [2024-10-01 13:43:58.236458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.118 [2024-10-01 13:43:58.236474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.118 [2024-10-01 13:43:58.236511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.118 [2024-10-01 13:43:58.236552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.118 [2024-10-01 13:43:58.236586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.118 [2024-10-01 13:43:58.236627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.118 [2024-10-01 13:43:58.236643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.118 [2024-10-01 13:43:58.236667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.118 [2024-10-01 13:43:58.236683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.118 [2024-10-01 13:43:58.236697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.118 [2024-10-01 13:43:58.236731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.118 [2024-10-01 13:43:58.236752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.118 [2024-10-01 13:43:58.246586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.118 [2024-10-01 13:43:58.246644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.118 [2024-10-01 13:43:58.246769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.118 [2024-10-01 13:43:58.246803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.118 [2024-10-01 13:43:58.246821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.118 [2024-10-01 13:43:58.246873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.118 [2024-10-01 13:43:58.246898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.118 [2024-10-01 13:43:58.246915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.118 [2024-10-01 13:43:58.247850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.118 [2024-10-01 13:43:58.247912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.118 [2024-10-01 13:43:58.248120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.118 [2024-10-01 13:43:58.248150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.118 [2024-10-01 13:43:58.248176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.118 [2024-10-01 13:43:58.248194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.118 [2024-10-01 13:43:58.248211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.118 [2024-10-01 13:43:58.248224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.118 [2024-10-01 13:43:58.248302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.118 [2024-10-01 13:43:58.248324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.118 [2024-10-01 13:43:58.258035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.118 [2024-10-01 13:43:58.258108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.118 [2024-10-01 13:43:58.258219] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.118 [2024-10-01 13:43:58.258253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.118 [2024-10-01 13:43:58.258272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.118 [2024-10-01 13:43:58.258323] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.118 [2024-10-01 13:43:58.258377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.118 [2024-10-01 13:43:58.258398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.118 [2024-10-01 13:43:58.258432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.118 [2024-10-01 13:43:58.258456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.118 [2024-10-01 13:43:58.258483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.118 [2024-10-01 13:43:58.258501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.118 [2024-10-01 13:43:58.258516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.118 [2024-10-01 13:43:58.258547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.118 [2024-10-01 13:43:58.258566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.118 [2024-10-01 13:43:58.258580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.118 [2024-10-01 13:43:58.258614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.118 [2024-10-01 13:43:58.258635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.118 [2024-10-01 13:43:58.269150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.118 [2024-10-01 13:43:58.269215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.118 [2024-10-01 13:43:58.269344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.118 [2024-10-01 13:43:58.269378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.118 [2024-10-01 13:43:58.269397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.118 [2024-10-01 13:43:58.269448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.118 [2024-10-01 13:43:58.269474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.118 [2024-10-01 13:43:58.269490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.118 [2024-10-01 13:43:58.269525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.118 [2024-10-01 13:43:58.269566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.118 [2024-10-01 13:43:58.269597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.118 [2024-10-01 13:43:58.269616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.118 [2024-10-01 13:43:58.269631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.118 [2024-10-01 13:43:58.269649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.118 [2024-10-01 13:43:58.269664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.118 [2024-10-01 13:43:58.269678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.118 [2024-10-01 13:43:58.269710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.118 [2024-10-01 13:43:58.269730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.118 [2024-10-01 13:43:58.280821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.118 [2024-10-01 13:43:58.280890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.118 [2024-10-01 13:43:58.281026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.119 [2024-10-01 13:43:58.281060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.119 [2024-10-01 13:43:58.281079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.119 [2024-10-01 13:43:58.281130] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.119 [2024-10-01 13:43:58.281155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.119 [2024-10-01 13:43:58.281172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.119 [2024-10-01 13:43:58.281206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.119 [2024-10-01 13:43:58.281231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.119 [2024-10-01 13:43:58.281258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.119 [2024-10-01 13:43:58.281276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.119 [2024-10-01 13:43:58.281291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.119 [2024-10-01 13:43:58.281309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.119 [2024-10-01 13:43:58.281324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.119 [2024-10-01 13:43:58.281340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.119 [2024-10-01 13:43:58.281373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.119 [2024-10-01 13:43:58.281393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.119 [2024-10-01 13:43:58.291692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.119 [2024-10-01 13:43:58.291789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.119 [2024-10-01 13:43:58.292856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.119 [2024-10-01 13:43:58.292908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.119 [2024-10-01 13:43:58.292931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.119 [2024-10-01 13:43:58.292987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.119 [2024-10-01 13:43:58.293012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.119 [2024-10-01 13:43:58.293029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.119 [2024-10-01 13:43:58.293224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.119 [2024-10-01 13:43:58.293256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.119 [2024-10-01 13:43:58.293394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.119 [2024-10-01 13:43:58.293420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.119 [2024-10-01 13:43:58.293466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.119 [2024-10-01 13:43:58.293487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.119 [2024-10-01 13:43:58.293503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.119 [2024-10-01 13:43:58.293518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.119 [2024-10-01 13:43:58.294803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.119 [2024-10-01 13:43:58.294849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.119 [2024-10-01 13:43:58.303494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.119 [2024-10-01 13:43:58.303568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.119 [2024-10-01 13:43:58.303792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.119 [2024-10-01 13:43:58.303827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.119 [2024-10-01 13:43:58.303847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.119 [2024-10-01 13:43:58.303917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.119 [2024-10-01 13:43:58.303945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.119 [2024-10-01 13:43:58.303962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.119 [2024-10-01 13:43:58.304089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.119 [2024-10-01 13:43:58.304121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.119 [2024-10-01 13:43:58.304158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.119 [2024-10-01 13:43:58.304178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.119 [2024-10-01 13:43:58.304194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.119 [2024-10-01 13:43:58.304212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.119 [2024-10-01 13:43:58.304228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.119 [2024-10-01 13:43:58.304241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.119 [2024-10-01 13:43:58.304275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.119 [2024-10-01 13:43:58.304295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.119 [2024-10-01 13:43:58.313656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.119 [2024-10-01 13:43:58.313743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.119 [2024-10-01 13:43:58.313832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.119 [2024-10-01 13:43:58.313864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.119 [2024-10-01 13:43:58.313883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.119 [2024-10-01 13:43:58.313952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.119 [2024-10-01 13:43:58.313980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.119 [2024-10-01 13:43:58.314021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.119 [2024-10-01 13:43:58.314042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.119 [2024-10-01 13:43:58.314076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.119 [2024-10-01 13:43:58.314097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.119 [2024-10-01 13:43:58.314112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.119 [2024-10-01 13:43:58.314126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.119 [2024-10-01 13:43:58.314162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.119 [2024-10-01 13:43:58.314182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.119 [2024-10-01 13:43:58.314196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.119 [2024-10-01 13:43:58.314211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.119 [2024-10-01 13:43:58.314241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.119 [2024-10-01 13:43:58.324623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.119 [2024-10-01 13:43:58.324686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.119 [2024-10-01 13:43:58.324796] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.119 [2024-10-01 13:43:58.324831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.119 [2024-10-01 13:43:58.324850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.119 [2024-10-01 13:43:58.324901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.119 [2024-10-01 13:43:58.324927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.119 [2024-10-01 13:43:58.324944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.119 [2024-10-01 13:43:58.324992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.119 [2024-10-01 13:43:58.325021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.119 [2024-10-01 13:43:58.325066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.119 [2024-10-01 13:43:58.325088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.119 [2024-10-01 13:43:58.325104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.119 [2024-10-01 13:43:58.325122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.119 [2024-10-01 13:43:58.325137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.119 [2024-10-01 13:43:58.325151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.119 [2024-10-01 13:43:58.325183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.119 [2024-10-01 13:43:58.325203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.119 [2024-10-01 13:43:58.334999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.119 [2024-10-01 13:43:58.335091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.119 [2024-10-01 13:43:58.336179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.119 [2024-10-01 13:43:58.336232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.119 [2024-10-01 13:43:58.336255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.119 [2024-10-01 13:43:58.336312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.120 [2024-10-01 13:43:58.336338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.120 [2024-10-01 13:43:58.336355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.120 [2024-10-01 13:43:58.336569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.120 [2024-10-01 13:43:58.336613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.120 [2024-10-01 13:43:58.336759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.120 [2024-10-01 13:43:58.336824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.120 [2024-10-01 13:43:58.336862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.120 [2024-10-01 13:43:58.336896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.120 [2024-10-01 13:43:58.336929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.120 [2024-10-01 13:43:58.336958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.120 [2024-10-01 13:43:58.338615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.120 [2024-10-01 13:43:58.338679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.120 [2024-10-01 13:43:58.345822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.120 [2024-10-01 13:43:58.345881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.120 [2024-10-01 13:43:58.345992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.120 [2024-10-01 13:43:58.346025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.120 [2024-10-01 13:43:58.346044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.120 [2024-10-01 13:43:58.346096] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.120 [2024-10-01 13:43:58.346121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.120 [2024-10-01 13:43:58.346138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.120 [2024-10-01 13:43:58.346172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.120 [2024-10-01 13:43:58.346195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.120 [2024-10-01 13:43:58.346223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.120 [2024-10-01 13:43:58.346240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.120 [2024-10-01 13:43:58.346255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.120 [2024-10-01 13:43:58.346294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.120 [2024-10-01 13:43:58.346312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.120 [2024-10-01 13:43:58.346326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.120 [2024-10-01 13:43:58.346360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.120 [2024-10-01 13:43:58.346380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.120 [2024-10-01 13:43:58.356220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.120 [2024-10-01 13:43:58.356285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.120 [2024-10-01 13:43:58.356395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.120 [2024-10-01 13:43:58.356428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.120 [2024-10-01 13:43:58.356447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.120 [2024-10-01 13:43:58.356498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.120 [2024-10-01 13:43:58.356524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.120 [2024-10-01 13:43:58.356558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.120 [2024-10-01 13:43:58.356596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.120 [2024-10-01 13:43:58.356620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.120 [2024-10-01 13:43:58.356647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.120 [2024-10-01 13:43:58.356665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.120 [2024-10-01 13:43:58.356680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.120 [2024-10-01 13:43:58.356697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.120 [2024-10-01 13:43:58.356713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.120 [2024-10-01 13:43:58.356726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.120 [2024-10-01 13:43:58.356758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.120 [2024-10-01 13:43:58.356778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.120 [2024-10-01 13:43:58.367485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.120 [2024-10-01 13:43:58.367565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.120 [2024-10-01 13:43:58.367687] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.120 [2024-10-01 13:43:58.367720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.120 [2024-10-01 13:43:58.367739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.120 [2024-10-01 13:43:58.367790] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.120 [2024-10-01 13:43:58.367815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.120 [2024-10-01 13:43:58.367831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.120 [2024-10-01 13:43:58.367919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.120 [2024-10-01 13:43:58.367949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.120 [2024-10-01 13:43:58.367978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.120 [2024-10-01 13:43:58.367997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.120 [2024-10-01 13:43:58.368012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.120 [2024-10-01 13:43:58.368030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.120 [2024-10-01 13:43:58.368046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.120 [2024-10-01 13:43:58.368059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.120 [2024-10-01 13:43:58.368091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.120 [2024-10-01 13:43:58.368111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.120 [2024-10-01 13:43:58.377985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.120 [2024-10-01 13:43:58.378077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.120 [2024-10-01 13:43:58.378209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.120 [2024-10-01 13:43:58.378244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.120 [2024-10-01 13:43:58.378263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.120 [2024-10-01 13:43:58.378315] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.120 [2024-10-01 13:43:58.378340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.120 [2024-10-01 13:43:58.378357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.120 [2024-10-01 13:43:58.379303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.120 [2024-10-01 13:43:58.379354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.120 [2024-10-01 13:43:58.379568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.120 [2024-10-01 13:43:58.379605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.120 [2024-10-01 13:43:58.379624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.120 [2024-10-01 13:43:58.379642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.120 [2024-10-01 13:43:58.379658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.120 [2024-10-01 13:43:58.379673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.120 [2024-10-01 13:43:58.379788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.120 [2024-10-01 13:43:58.379821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.120 [2024-10-01 13:43:58.389048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.120 [2024-10-01 13:43:58.389107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.120 [2024-10-01 13:43:58.389242] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.120 [2024-10-01 13:43:58.389278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.120 [2024-10-01 13:43:58.389296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.120 [2024-10-01 13:43:58.389348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.120 [2024-10-01 13:43:58.389374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.120 [2024-10-01 13:43:58.389390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.120 [2024-10-01 13:43:58.389424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.121 [2024-10-01 13:43:58.389448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.121 [2024-10-01 13:43:58.389475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.121 [2024-10-01 13:43:58.389493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.121 [2024-10-01 13:43:58.389508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.121 [2024-10-01 13:43:58.389526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.121 [2024-10-01 13:43:58.389559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.121 [2024-10-01 13:43:58.389574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.121 [2024-10-01 13:43:58.389839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.121 [2024-10-01 13:43:58.389866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.121 [2024-10-01 13:43:58.399277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.121 [2024-10-01 13:43:58.399331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.121 [2024-10-01 13:43:58.399430] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.121 [2024-10-01 13:43:58.399462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.121 [2024-10-01 13:43:58.399481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.121 [2024-10-01 13:43:58.399531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.121 [2024-10-01 13:43:58.399574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.121 [2024-10-01 13:43:58.399592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.121 [2024-10-01 13:43:58.399633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.121 [2024-10-01 13:43:58.399657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.121 [2024-10-01 13:43:58.399684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.121 [2024-10-01 13:43:58.399702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.121 [2024-10-01 13:43:58.399716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.121 [2024-10-01 13:43:58.399733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.121 [2024-10-01 13:43:58.399768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.121 [2024-10-01 13:43:58.399783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.121 [2024-10-01 13:43:58.399818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.121 [2024-10-01 13:43:58.399838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.121 [2024-10-01 13:43:58.410559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.121 [2024-10-01 13:43:58.410619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.121 [2024-10-01 13:43:58.410726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.121 [2024-10-01 13:43:58.410758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.121 [2024-10-01 13:43:58.410776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.121 [2024-10-01 13:43:58.410827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.121 [2024-10-01 13:43:58.410853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.121 [2024-10-01 13:43:58.410869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.121 [2024-10-01 13:43:58.410904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.121 [2024-10-01 13:43:58.410928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.121 [2024-10-01 13:43:58.410955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.121 [2024-10-01 13:43:58.410973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.121 [2024-10-01 13:43:58.410995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.121 [2024-10-01 13:43:58.411012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.121 [2024-10-01 13:43:58.411028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.121 [2024-10-01 13:43:58.411042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.121 [2024-10-01 13:43:58.411075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.121 [2024-10-01 13:43:58.411095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.121 [2024-10-01 13:43:58.420801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.121 [2024-10-01 13:43:58.420861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.121 [2024-10-01 13:43:58.420971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.121 [2024-10-01 13:43:58.421003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.121 [2024-10-01 13:43:58.421022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.121 [2024-10-01 13:43:58.421072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.121 [2024-10-01 13:43:58.421097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.121 [2024-10-01 13:43:58.421113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.121 [2024-10-01 13:43:58.422049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.121 [2024-10-01 13:43:58.422123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.121 [2024-10-01 13:43:58.422349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.121 [2024-10-01 13:43:58.422378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.121 [2024-10-01 13:43:58.422395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.121 [2024-10-01 13:43:58.422413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.121 [2024-10-01 13:43:58.422429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.121 [2024-10-01 13:43:58.422444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.121 [2024-10-01 13:43:58.423753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.121 [2024-10-01 13:43:58.423798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.121 [2024-10-01 13:43:58.431955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.121 [2024-10-01 13:43:58.432012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.121 [2024-10-01 13:43:58.432124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.121 [2024-10-01 13:43:58.432158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.121 [2024-10-01 13:43:58.432176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.121 [2024-10-01 13:43:58.432228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.121 [2024-10-01 13:43:58.432254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.121 [2024-10-01 13:43:58.432272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.121 [2024-10-01 13:43:58.432306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.121 [2024-10-01 13:43:58.432330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.121 [2024-10-01 13:43:58.432357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.121 [2024-10-01 13:43:58.432375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.121 [2024-10-01 13:43:58.432390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.121 [2024-10-01 13:43:58.432407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.121 [2024-10-01 13:43:58.432423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.121 [2024-10-01 13:43:58.432437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.121 [2024-10-01 13:43:58.432721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.121 [2024-10-01 13:43:58.432750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.122 [2024-10-01 13:43:58.442179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.122 [2024-10-01 13:43:58.442257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.122 [2024-10-01 13:43:58.442357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.122 [2024-10-01 13:43:58.442394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.122 [2024-10-01 13:43:58.442435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.122 [2024-10-01 13:43:58.442509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.122 [2024-10-01 13:43:58.442553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.122 [2024-10-01 13:43:58.442574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.122 [2024-10-01 13:43:58.442594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.122 [2024-10-01 13:43:58.442628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.122 [2024-10-01 13:43:58.442650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.122 [2024-10-01 13:43:58.442664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.122 [2024-10-01 13:43:58.442678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.122 [2024-10-01 13:43:58.442711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.122 [2024-10-01 13:43:58.442732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.122 [2024-10-01 13:43:58.442746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.122 [2024-10-01 13:43:58.442760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.122 [2024-10-01 13:43:58.442790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.122 [2024-10-01 13:43:58.453632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.122 [2024-10-01 13:43:58.453684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.122 [2024-10-01 13:43:58.453787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.122 [2024-10-01 13:43:58.453819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.122 [2024-10-01 13:43:58.453838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.122 [2024-10-01 13:43:58.453888] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.122 [2024-10-01 13:43:58.453913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.122 [2024-10-01 13:43:58.453929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.122 [2024-10-01 13:43:58.453963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.122 [2024-10-01 13:43:58.453986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.122 [2024-10-01 13:43:58.454014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.122 [2024-10-01 13:43:58.454035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.122 [2024-10-01 13:43:58.454049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.122 [2024-10-01 13:43:58.454067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.122 [2024-10-01 13:43:58.454082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.122 [2024-10-01 13:43:58.454112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.122 [2024-10-01 13:43:58.454148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.122 [2024-10-01 13:43:58.454169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.122 [2024-10-01 13:43:58.463991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.122 [2024-10-01 13:43:58.464060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.122 [2024-10-01 13:43:58.464173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.122 [2024-10-01 13:43:58.464207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.122 [2024-10-01 13:43:58.464226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.122 [2024-10-01 13:43:58.464280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.122 [2024-10-01 13:43:58.464305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.122 [2024-10-01 13:43:58.464322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.122 [2024-10-01 13:43:58.465271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.122 [2024-10-01 13:43:58.465318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.122 [2024-10-01 13:43:58.465527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.122 [2024-10-01 13:43:58.465571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.122 [2024-10-01 13:43:58.465589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.122 [2024-10-01 13:43:58.465607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.122 [2024-10-01 13:43:58.465624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.122 [2024-10-01 13:43:58.465638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.122 [2024-10-01 13:43:58.465720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.122 [2024-10-01 13:43:58.465743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.122 [2024-10-01 13:43:58.475069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.122 [2024-10-01 13:43:58.475125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.122 [2024-10-01 13:43:58.475239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.122 [2024-10-01 13:43:58.475272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.122 [2024-10-01 13:43:58.475291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.122 [2024-10-01 13:43:58.475342] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.122 [2024-10-01 13:43:58.475367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.122 [2024-10-01 13:43:58.475385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.122 [2024-10-01 13:43:58.475418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.122 [2024-10-01 13:43:58.475442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.122 [2024-10-01 13:43:58.475499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.122 [2024-10-01 13:43:58.475518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.122 [2024-10-01 13:43:58.475552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.122 [2024-10-01 13:43:58.475585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.122 [2024-10-01 13:43:58.475607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.122 [2024-10-01 13:43:58.475622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.122 [2024-10-01 13:43:58.475903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.122 [2024-10-01 13:43:58.475932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.122 [2024-10-01 13:43:58.485490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.122 [2024-10-01 13:43:58.485589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.122 [2024-10-01 13:43:58.485732] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.122 [2024-10-01 13:43:58.485765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.122 [2024-10-01 13:43:58.485785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.122 [2024-10-01 13:43:58.485837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.122 [2024-10-01 13:43:58.485862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.122 [2024-10-01 13:43:58.485879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.122 [2024-10-01 13:43:58.485914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.122 [2024-10-01 13:43:58.485939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.122 [2024-10-01 13:43:58.485966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.122 [2024-10-01 13:43:58.485984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.122 [2024-10-01 13:43:58.486000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.122 [2024-10-01 13:43:58.486017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.122 [2024-10-01 13:43:58.486032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.122 [2024-10-01 13:43:58.486046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.122 [2024-10-01 13:43:58.486078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.122 [2024-10-01 13:43:58.486098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.122 [2024-10-01 13:43:58.497792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.123 [2024-10-01 13:43:58.497865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.123 [2024-10-01 13:43:58.497989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.123 [2024-10-01 13:43:58.498022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.123 [2024-10-01 13:43:58.498041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.123 [2024-10-01 13:43:58.498126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.123 [2024-10-01 13:43:58.498153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.123 [2024-10-01 13:43:58.498170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.123 [2024-10-01 13:43:58.498205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.123 [2024-10-01 13:43:58.498230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.123 [2024-10-01 13:43:58.498257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.123 [2024-10-01 13:43:58.498275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.123 [2024-10-01 13:43:58.498290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.123 [2024-10-01 13:43:58.498308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.123 [2024-10-01 13:43:58.498324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.123 [2024-10-01 13:43:58.498337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.123 [2024-10-01 13:43:58.498370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.123 [2024-10-01 13:43:58.498390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.123 [2024-10-01 13:43:58.508849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.123 [2024-10-01 13:43:58.508941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.123 [2024-10-01 13:43:58.509089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.123 [2024-10-01 13:43:58.509125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.123 [2024-10-01 13:43:58.509144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.123 [2024-10-01 13:43:58.509197] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.123 [2024-10-01 13:43:58.509222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.123 [2024-10-01 13:43:58.509239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.123 [2024-10-01 13:43:58.510206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.123 [2024-10-01 13:43:58.510253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.123 [2024-10-01 13:43:58.510477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.123 [2024-10-01 13:43:58.510516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.123 [2024-10-01 13:43:58.510549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.123 [2024-10-01 13:43:58.510571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.123 [2024-10-01 13:43:58.510588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.123 [2024-10-01 13:43:58.510602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.123 [2024-10-01 13:43:58.510743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.123 [2024-10-01 13:43:58.510768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.123 [2024-10-01 13:43:58.520110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.123 [2024-10-01 13:43:58.520197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.123 [2024-10-01 13:43:58.520332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.123 [2024-10-01 13:43:58.520367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.123 [2024-10-01 13:43:58.520386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.123 [2024-10-01 13:43:58.520438] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.123 [2024-10-01 13:43:58.520464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.123 [2024-10-01 13:43:58.520481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.123 [2024-10-01 13:43:58.520516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.123 [2024-10-01 13:43:58.520559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.123 [2024-10-01 13:43:58.520828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.123 [2024-10-01 13:43:58.520857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.123 [2024-10-01 13:43:58.520873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.123 [2024-10-01 13:43:58.520892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.123 [2024-10-01 13:43:58.520907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.123 [2024-10-01 13:43:58.520921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.123 [2024-10-01 13:43:58.521068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.123 [2024-10-01 13:43:58.521095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.123 [2024-10-01 13:43:58.530305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.123 [2024-10-01 13:43:58.530358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.123 [2024-10-01 13:43:58.530464] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.123 [2024-10-01 13:43:58.530497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.123 [2024-10-01 13:43:58.530516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.123 [2024-10-01 13:43:58.530584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.123 [2024-10-01 13:43:58.530612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.123 [2024-10-01 13:43:58.530629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.123 [2024-10-01 13:43:58.530663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.123 [2024-10-01 13:43:58.530687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.123 [2024-10-01 13:43:58.530714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.123 [2024-10-01 13:43:58.530762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.123 [2024-10-01 13:43:58.530779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.123 [2024-10-01 13:43:58.530796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.123 [2024-10-01 13:43:58.530812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.123 [2024-10-01 13:43:58.530825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.123 [2024-10-01 13:43:58.530858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.123 [2024-10-01 13:43:58.530878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.123 [2024-10-01 13:43:58.541698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.123 [2024-10-01 13:43:58.541759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.123 [2024-10-01 13:43:58.541869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.123 [2024-10-01 13:43:58.541903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.123 [2024-10-01 13:43:58.541922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.123 [2024-10-01 13:43:58.541974] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.123 [2024-10-01 13:43:58.541999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.123 [2024-10-01 13:43:58.542015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.123 [2024-10-01 13:43:58.542049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.123 [2024-10-01 13:43:58.542073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.123 [2024-10-01 13:43:58.542101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.123 [2024-10-01 13:43:58.542119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.123 [2024-10-01 13:43:58.542134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.123 [2024-10-01 13:43:58.542153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.123 [2024-10-01 13:43:58.542169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.123 [2024-10-01 13:43:58.542182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.123 [2024-10-01 13:43:58.542215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.123 [2024-10-01 13:43:58.542235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.123 [2024-10-01 13:43:58.551855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.123 [2024-10-01 13:43:58.551993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.123 [2024-10-01 13:43:58.552111] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.123 [2024-10-01 13:43:58.552145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.124 [2024-10-01 13:43:58.552164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.124 [2024-10-01 13:43:58.553189] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.124 [2024-10-01 13:43:58.553236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.124 [2024-10-01 13:43:58.553258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.124 [2024-10-01 13:43:58.553280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.124 [2024-10-01 13:43:58.553499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.124 [2024-10-01 13:43:58.553530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.124 [2024-10-01 13:43:58.553564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.124 [2024-10-01 13:43:58.553581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.124 [2024-10-01 13:43:58.553700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.124 [2024-10-01 13:43:58.553723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.124 [2024-10-01 13:43:58.553738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.124 [2024-10-01 13:43:58.553752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.124 [2024-10-01 13:43:58.554992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.124 [2024-10-01 13:43:58.563186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.124 [2024-10-01 13:43:58.563266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.124 [2024-10-01 13:43:58.563425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.124 [2024-10-01 13:43:58.563462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.124 [2024-10-01 13:43:58.563481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.124 [2024-10-01 13:43:58.563555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.124 [2024-10-01 13:43:58.563600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.124 [2024-10-01 13:43:58.563622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.124 [2024-10-01 13:43:58.563662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.124 [2024-10-01 13:43:58.563688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.124 [2024-10-01 13:43:58.563715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.124 [2024-10-01 13:43:58.563733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.124 [2024-10-01 13:43:58.563749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.124 [2024-10-01 13:43:58.563767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.124 [2024-10-01 13:43:58.563783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.124 [2024-10-01 13:43:58.563796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.124 [2024-10-01 13:43:58.563829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.124 [2024-10-01 13:43:58.563849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.124 [2024-10-01 13:43:58.575184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.124 [2024-10-01 13:43:58.575305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.124 [2024-10-01 13:43:58.575461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.124 [2024-10-01 13:43:58.575501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.124 [2024-10-01 13:43:58.575522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.124 [2024-10-01 13:43:58.575595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.124 [2024-10-01 13:43:58.575633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.124 [2024-10-01 13:43:58.575653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.124 [2024-10-01 13:43:58.575691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.124 [2024-10-01 13:43:58.575718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.124 [2024-10-01 13:43:58.575769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.124 [2024-10-01 13:43:58.575794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.124 [2024-10-01 13:43:58.575810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.124 [2024-10-01 13:43:58.575829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.124 [2024-10-01 13:43:58.575845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.124 [2024-10-01 13:43:58.575858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.124 [2024-10-01 13:43:58.575909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.124 [2024-10-01 13:43:58.575932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.124 [2024-10-01 13:43:58.586399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.124 [2024-10-01 13:43:58.586498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.124 [2024-10-01 13:43:58.586654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.124 [2024-10-01 13:43:58.586692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.124 [2024-10-01 13:43:58.586712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.124 [2024-10-01 13:43:58.586765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.124 [2024-10-01 13:43:58.586791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.124 [2024-10-01 13:43:58.586807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.124 [2024-10-01 13:43:58.586844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.124 [2024-10-01 13:43:58.586868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.124 [2024-10-01 13:43:58.586896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.124 [2024-10-01 13:43:58.586914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.124 [2024-10-01 13:43:58.586963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.124 [2024-10-01 13:43:58.586983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.124 [2024-10-01 13:43:58.586999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.124 [2024-10-01 13:43:58.587012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.124 [2024-10-01 13:43:58.587059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.124 [2024-10-01 13:43:58.587092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.124 [2024-10-01 13:43:58.596797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.124 [2024-10-01 13:43:58.596894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.124 [2024-10-01 13:43:58.597031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.124 [2024-10-01 13:43:58.597068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.124 [2024-10-01 13:43:58.597087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.124 [2024-10-01 13:43:58.597140] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.124 [2024-10-01 13:43:58.597165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.124 [2024-10-01 13:43:58.597182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.124 [2024-10-01 13:43:58.598140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.124 [2024-10-01 13:43:58.598187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.124 [2024-10-01 13:43:58.598386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.124 [2024-10-01 13:43:58.598432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.124 [2024-10-01 13:43:58.598451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.124 [2024-10-01 13:43:58.598470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.124 [2024-10-01 13:43:58.598486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.124 [2024-10-01 13:43:58.598500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.124 [2024-10-01 13:43:58.598631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.124 [2024-10-01 13:43:58.598656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.124 [2024-10-01 13:43:58.607806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.124 [2024-10-01 13:43:58.607864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.124 [2024-10-01 13:43:58.607988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.124 [2024-10-01 13:43:58.608021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.124 [2024-10-01 13:43:58.608039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.124 [2024-10-01 13:43:58.608090] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.124 [2024-10-01 13:43:58.608116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.125 [2024-10-01 13:43:58.608160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.125 [2024-10-01 13:43:58.608198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.125 [2024-10-01 13:43:58.608221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.125 [2024-10-01 13:43:58.608249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.125 [2024-10-01 13:43:58.608267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.125 [2024-10-01 13:43:58.608282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.125 [2024-10-01 13:43:58.608299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.125 [2024-10-01 13:43:58.608315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.125 [2024-10-01 13:43:58.608329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.125 [2024-10-01 13:43:58.608361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.125 [2024-10-01 13:43:58.608381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.125 [2024-10-01 13:43:58.619831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.125 [2024-10-01 13:43:58.619935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.125 [2024-10-01 13:43:58.620106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.125 [2024-10-01 13:43:58.620143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.125 [2024-10-01 13:43:58.620163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.125 [2024-10-01 13:43:58.620215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.125 [2024-10-01 13:43:58.620240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.125 [2024-10-01 13:43:58.620257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.125 [2024-10-01 13:43:58.620293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.125 [2024-10-01 13:43:58.620317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.125 [2024-10-01 13:43:58.620344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.125 [2024-10-01 13:43:58.620363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.125 [2024-10-01 13:43:58.620378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.125 [2024-10-01 13:43:58.620396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.125 [2024-10-01 13:43:58.620412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.125 [2024-10-01 13:43:58.620426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.125 [2024-10-01 13:43:58.620459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.125 [2024-10-01 13:43:58.620481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.125 [2024-10-01 13:43:58.631707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.125 [2024-10-01 13:43:58.631804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.125 [2024-10-01 13:43:58.632085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.125 [2024-10-01 13:43:58.632120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.125 [2024-10-01 13:43:58.632139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.125 [2024-10-01 13:43:58.632191] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.125 [2024-10-01 13:43:58.632217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.125 [2024-10-01 13:43:58.632233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.125 [2024-10-01 13:43:58.632276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.125 [2024-10-01 13:43:58.632302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.125 [2024-10-01 13:43:58.632330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.125 [2024-10-01 13:43:58.632349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.125 [2024-10-01 13:43:58.632365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.125 [2024-10-01 13:43:58.632384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.125 [2024-10-01 13:43:58.632400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.125 [2024-10-01 13:43:58.632414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.125 [2024-10-01 13:43:58.632447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.125 [2024-10-01 13:43:58.632467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.125 [2024-10-01 13:43:58.643026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.125 [2024-10-01 13:43:58.643129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.125 [2024-10-01 13:43:58.643269] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.125 [2024-10-01 13:43:58.643306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.125 [2024-10-01 13:43:58.643325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.125 [2024-10-01 13:43:58.643377] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.125 [2024-10-01 13:43:58.643403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.125 [2024-10-01 13:43:58.643422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.125 [2024-10-01 13:43:58.644439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.125 [2024-10-01 13:43:58.644489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.125 [2024-10-01 13:43:58.644718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.125 [2024-10-01 13:43:58.644766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.125 [2024-10-01 13:43:58.644786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.125 [2024-10-01 13:43:58.644836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.125 [2024-10-01 13:43:58.644855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.125 [2024-10-01 13:43:58.644869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.125 [2024-10-01 13:43:58.644988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.125 [2024-10-01 13:43:58.645012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.125 [2024-10-01 13:43:58.654374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.125 [2024-10-01 13:43:58.654460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.125 [2024-10-01 13:43:58.654617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.125 [2024-10-01 13:43:58.654653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.125 [2024-10-01 13:43:58.654673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.125 [2024-10-01 13:43:58.654726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.125 [2024-10-01 13:43:58.654753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.125 [2024-10-01 13:43:58.654770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.125 [2024-10-01 13:43:58.654820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.125 [2024-10-01 13:43:58.654859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.125 [2024-10-01 13:43:58.654902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.125 [2024-10-01 13:43:58.654925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.125 [2024-10-01 13:43:58.654942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.125 [2024-10-01 13:43:58.654959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.125 [2024-10-01 13:43:58.654975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.125 [2024-10-01 13:43:58.654989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.126 [2024-10-01 13:43:58.655259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.126 [2024-10-01 13:43:58.655297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.126 8416.67 IOPS, 32.88 MiB/s [2024-10-01 13:43:58.667431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.126 [2024-10-01 13:43:58.667502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.126 [2024-10-01 13:43:58.668651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.126 [2024-10-01 13:43:58.668701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.126 [2024-10-01 13:43:58.668725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.126 [2024-10-01 13:43:58.668791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.126 [2024-10-01 13:43:58.668817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.126 [2024-10-01 13:43:58.668834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.126 [2024-10-01 13:43:58.669748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.126 [2024-10-01 13:43:58.669798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.126 [2024-10-01 13:43:58.669985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.126 [2024-10-01 13:43:58.670021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.126 [2024-10-01 13:43:58.670040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.126 [2024-10-01 13:43:58.670059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.126 [2024-10-01 13:43:58.670076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.126 [2024-10-01 13:43:58.670089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.126 [2024-10-01 13:43:58.670204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.126 [2024-10-01 13:43:58.670227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.126 [2024-10-01 13:43:58.679284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.126 [2024-10-01 13:43:58.679375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.126 [2024-10-01 13:43:58.679509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.126 [2024-10-01 13:43:58.679566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.126 [2024-10-01 13:43:58.679587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.126 [2024-10-01 13:43:58.679641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.126 [2024-10-01 13:43:58.679667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.126 [2024-10-01 13:43:58.679684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.126 [2024-10-01 13:43:58.679720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.126 [2024-10-01 13:43:58.679745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.126 [2024-10-01 13:43:58.679772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.126 [2024-10-01 13:43:58.679791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.126 [2024-10-01 13:43:58.679806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.126 [2024-10-01 13:43:58.679824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.126 [2024-10-01 13:43:58.679840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.126 [2024-10-01 13:43:58.679854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.126 [2024-10-01 13:43:58.679900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.126 [2024-10-01 13:43:58.679923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.126 [2024-10-01 13:43:58.690475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.126 [2024-10-01 13:43:58.690586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.126 [2024-10-01 13:43:58.690909] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.126 [2024-10-01 13:43:58.690957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.126 [2024-10-01 13:43:58.690979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.126 [2024-10-01 13:43:58.691034] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.126 [2024-10-01 13:43:58.691060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.126 [2024-10-01 13:43:58.691077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.126 [2024-10-01 13:43:58.691139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.126 [2024-10-01 13:43:58.691170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.126 [2024-10-01 13:43:58.691199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.126 [2024-10-01 13:43:58.691217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.126 [2024-10-01 13:43:58.691233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.126 [2024-10-01 13:43:58.691251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.126 [2024-10-01 13:43:58.691267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.126 [2024-10-01 13:43:58.691281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.126 [2024-10-01 13:43:58.691314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.126 [2024-10-01 13:43:58.691335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.126 [2024-10-01 13:43:58.701099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.126 [2024-10-01 13:43:58.701186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.126 [2024-10-01 13:43:58.701321] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.126 [2024-10-01 13:43:58.701357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.126 [2024-10-01 13:43:58.701376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.126 [2024-10-01 13:43:58.701427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.126 [2024-10-01 13:43:58.701453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.126 [2024-10-01 13:43:58.701470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.126 [2024-10-01 13:43:58.702420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.126 [2024-10-01 13:43:58.702465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.126 [2024-10-01 13:43:58.702679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.126 [2024-10-01 13:43:58.702708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.126 [2024-10-01 13:43:58.702725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.126 [2024-10-01 13:43:58.702744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.126 [2024-10-01 13:43:58.702785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.126 [2024-10-01 13:43:58.702800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.126 [2024-10-01 13:43:58.702922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.126 [2024-10-01 13:43:58.702945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.126 [2024-10-01 13:43:58.712244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.126 [2024-10-01 13:43:58.712338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.126 [2024-10-01 13:43:58.712473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.126 [2024-10-01 13:43:58.712509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.126 [2024-10-01 13:43:58.712529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.126 [2024-10-01 13:43:58.712602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.126 [2024-10-01 13:43:58.712629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.126 [2024-10-01 13:43:58.712646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.126 [2024-10-01 13:43:58.712683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.126 [2024-10-01 13:43:58.712708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.126 [2024-10-01 13:43:58.712979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.126 [2024-10-01 13:43:58.713007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.126 [2024-10-01 13:43:58.713023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.126 [2024-10-01 13:43:58.713041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.126 [2024-10-01 13:43:58.713057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.126 [2024-10-01 13:43:58.713071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.126 [2024-10-01 13:43:58.713222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.126 [2024-10-01 13:43:58.713248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.127 [2024-10-01 13:43:58.722630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.127 [2024-10-01 13:43:58.722726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.127 [2024-10-01 13:43:58.722858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.127 [2024-10-01 13:43:58.722893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.127 [2024-10-01 13:43:58.722912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.127 [2024-10-01 13:43:58.722963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.127 [2024-10-01 13:43:58.722988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.127 [2024-10-01 13:43:58.723004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.127 [2024-10-01 13:43:58.723073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.127 [2024-10-01 13:43:58.723100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.127 [2024-10-01 13:43:58.723128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.127 [2024-10-01 13:43:58.723146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.127 [2024-10-01 13:43:58.723162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.127 [2024-10-01 13:43:58.723179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.127 [2024-10-01 13:43:58.723195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.127 [2024-10-01 13:43:58.723209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.127 [2024-10-01 13:43:58.723241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.127 [2024-10-01 13:43:58.723261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.127 [2024-10-01 13:43:58.734111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.127 [2024-10-01 13:43:58.734247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.127 [2024-10-01 13:43:58.734462] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.127 [2024-10-01 13:43:58.734520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.127 [2024-10-01 13:43:58.734577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.127 [2024-10-01 13:43:58.734658] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.127 [2024-10-01 13:43:58.734687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.127 [2024-10-01 13:43:58.734704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.127 [2024-10-01 13:43:58.734746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.127 [2024-10-01 13:43:58.734771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.127 [2024-10-01 13:43:58.734798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.127 [2024-10-01 13:43:58.734817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.127 [2024-10-01 13:43:58.734833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.127 [2024-10-01 13:43:58.734851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.127 [2024-10-01 13:43:58.734867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.127 [2024-10-01 13:43:58.734881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.127 [2024-10-01 13:43:58.734914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.127 [2024-10-01 13:43:58.734935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.127 [2024-10-01 13:43:58.744794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.127 [2024-10-01 13:43:58.744882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.127 [2024-10-01 13:43:58.745019] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.127 [2024-10-01 13:43:58.745084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.127 [2024-10-01 13:43:58.745106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.127 [2024-10-01 13:43:58.745161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.127 [2024-10-01 13:43:58.745187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.127 [2024-10-01 13:43:58.745203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.127 [2024-10-01 13:43:58.746163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.127 [2024-10-01 13:43:58.746211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.127 [2024-10-01 13:43:58.746416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.127 [2024-10-01 13:43:58.746454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.127 [2024-10-01 13:43:58.746473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.127 [2024-10-01 13:43:58.746492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.127 [2024-10-01 13:43:58.746508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.127 [2024-10-01 13:43:58.746521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.127 [2024-10-01 13:43:58.746653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.127 [2024-10-01 13:43:58.746678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.127 [2024-10-01 13:43:58.755840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.127 [2024-10-01 13:43:58.755928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.127 [2024-10-01 13:43:58.756060] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.127 [2024-10-01 13:43:58.756095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.127 [2024-10-01 13:43:58.756113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.127 [2024-10-01 13:43:58.756166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.127 [2024-10-01 13:43:58.756191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.127 [2024-10-01 13:43:58.756208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.127 [2024-10-01 13:43:58.756244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.127 [2024-10-01 13:43:58.756268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.127 [2024-10-01 13:43:58.756295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.127 [2024-10-01 13:43:58.756313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.127 [2024-10-01 13:43:58.756329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.127 [2024-10-01 13:43:58.756347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.127 [2024-10-01 13:43:58.756362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.127 [2024-10-01 13:43:58.756401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.127 [2024-10-01 13:43:58.756697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.127 [2024-10-01 13:43:58.756726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.127 [2024-10-01 13:43:58.766368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.127 [2024-10-01 13:43:58.766460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.127 [2024-10-01 13:43:58.766625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.127 [2024-10-01 13:43:58.766661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.127 [2024-10-01 13:43:58.766680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.127 [2024-10-01 13:43:58.766733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.127 [2024-10-01 13:43:58.766758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.127 [2024-10-01 13:43:58.766775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.127 [2024-10-01 13:43:58.766812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.127 [2024-10-01 13:43:58.766837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.127 [2024-10-01 13:43:58.766864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.127 [2024-10-01 13:43:58.766884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.127 [2024-10-01 13:43:58.766900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.127 [2024-10-01 13:43:58.766918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.127 [2024-10-01 13:43:58.766934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.127 [2024-10-01 13:43:58.766947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.127 [2024-10-01 13:43:58.766980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.127 [2024-10-01 13:43:58.767000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.127 [2024-10-01 13:43:58.777676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.127 [2024-10-01 13:43:58.777739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.127 [2024-10-01 13:43:58.777858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.128 [2024-10-01 13:43:58.777892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.128 [2024-10-01 13:43:58.777911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.128 [2024-10-01 13:43:58.777961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.128 [2024-10-01 13:43:58.777986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.128 [2024-10-01 13:43:58.778002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.128 [2024-10-01 13:43:58.778053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.128 [2024-10-01 13:43:58.778109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.128 [2024-10-01 13:43:58.778142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.128 [2024-10-01 13:43:58.778161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.128 [2024-10-01 13:43:58.778176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.128 [2024-10-01 13:43:58.778194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.128 [2024-10-01 13:43:58.778210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.128 [2024-10-01 13:43:58.778224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.128 [2024-10-01 13:43:58.778257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.128 [2024-10-01 13:43:58.778278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.128 [2024-10-01 13:43:58.788010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.128 [2024-10-01 13:43:58.788061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.128 [2024-10-01 13:43:58.788161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.128 [2024-10-01 13:43:58.788193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.128 [2024-10-01 13:43:58.788211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.128 [2024-10-01 13:43:58.788262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.128 [2024-10-01 13:43:58.788287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.128 [2024-10-01 13:43:58.788303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.128 [2024-10-01 13:43:58.789235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.128 [2024-10-01 13:43:58.789285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.128 [2024-10-01 13:43:58.789486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.128 [2024-10-01 13:43:58.789526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.128 [2024-10-01 13:43:58.789559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.128 [2024-10-01 13:43:58.789579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.128 [2024-10-01 13:43:58.789595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.128 [2024-10-01 13:43:58.789609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.128 [2024-10-01 13:43:58.790877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.128 [2024-10-01 13:43:58.790915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.128 [2024-10-01 13:43:58.799037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.128 [2024-10-01 13:43:58.799114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.128 [2024-10-01 13:43:58.799241] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.128 [2024-10-01 13:43:58.799275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.128 [2024-10-01 13:43:58.799327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.128 [2024-10-01 13:43:58.799382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.128 [2024-10-01 13:43:58.799408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.128 [2024-10-01 13:43:58.799425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.128 [2024-10-01 13:43:58.799462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.128 [2024-10-01 13:43:58.799486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.128 [2024-10-01 13:43:58.799513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.128 [2024-10-01 13:43:58.799531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.128 [2024-10-01 13:43:58.799565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.128 [2024-10-01 13:43:58.799582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.128 [2024-10-01 13:43:58.799598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.128 [2024-10-01 13:43:58.799611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.128 [2024-10-01 13:43:58.799902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.128 [2024-10-01 13:43:58.799931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.128 [2024-10-01 13:43:58.809398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.128 [2024-10-01 13:43:58.809489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.128 [2024-10-01 13:43:58.809651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.128 [2024-10-01 13:43:58.809686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.128 [2024-10-01 13:43:58.809706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.128 [2024-10-01 13:43:58.809759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.128 [2024-10-01 13:43:58.809784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.128 [2024-10-01 13:43:58.809801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.128 [2024-10-01 13:43:58.809837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.128 [2024-10-01 13:43:58.809861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.128 [2024-10-01 13:43:58.809889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.128 [2024-10-01 13:43:58.809908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.128 [2024-10-01 13:43:58.809924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.128 [2024-10-01 13:43:58.809941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.128 [2024-10-01 13:43:58.809956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.128 [2024-10-01 13:43:58.809970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.128 [2024-10-01 13:43:58.810029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.128 [2024-10-01 13:43:58.810052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.128 [2024-10-01 13:43:58.820813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.128 [2024-10-01 13:43:58.820865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.128 [2024-10-01 13:43:58.821128] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.128 [2024-10-01 13:43:58.821173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.128 [2024-10-01 13:43:58.821194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.128 [2024-10-01 13:43:58.821247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.128 [2024-10-01 13:43:58.821273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.128 [2024-10-01 13:43:58.821289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.128 [2024-10-01 13:43:58.821332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.128 [2024-10-01 13:43:58.821357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.128 [2024-10-01 13:43:58.821385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.128 [2024-10-01 13:43:58.821403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.128 [2024-10-01 13:43:58.821417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.128 [2024-10-01 13:43:58.821435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.128 [2024-10-01 13:43:58.821450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.128 [2024-10-01 13:43:58.821464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.128 [2024-10-01 13:43:58.821500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.128 [2024-10-01 13:43:58.821520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.128 [2024-10-01 13:43:58.830951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.128 [2024-10-01 13:43:58.831002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.128 [2024-10-01 13:43:58.831100] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.128 [2024-10-01 13:43:58.831132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.128 [2024-10-01 13:43:58.831150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.128 [2024-10-01 13:43:58.831200] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.129 [2024-10-01 13:43:58.831225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.129 [2024-10-01 13:43:58.831242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.129 [2024-10-01 13:43:58.832507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.129 [2024-10-01 13:43:58.832562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.129 [2024-10-01 13:43:58.832808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.129 [2024-10-01 13:43:58.832847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.129 [2024-10-01 13:43:58.832866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.129 [2024-10-01 13:43:58.832884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.129 [2024-10-01 13:43:58.832900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.129 [2024-10-01 13:43:58.832913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.129 [2024-10-01 13:43:58.833849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.129 [2024-10-01 13:43:58.833888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.129 [2024-10-01 13:43:58.841893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.129 [2024-10-01 13:43:58.841947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.129 [2024-10-01 13:43:58.842217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.129 [2024-10-01 13:43:58.842261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.129 [2024-10-01 13:43:58.842282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.129 [2024-10-01 13:43:58.842334] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.129 [2024-10-01 13:43:58.842360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.129 [2024-10-01 13:43:58.842377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.129 [2024-10-01 13:43:58.843389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.129 [2024-10-01 13:43:58.843434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.129 [2024-10-01 13:43:58.844094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.129 [2024-10-01 13:43:58.844141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.129 [2024-10-01 13:43:58.844159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.129 [2024-10-01 13:43:58.844177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.129 [2024-10-01 13:43:58.844193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.129 [2024-10-01 13:43:58.844207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.129 [2024-10-01 13:43:58.844528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.129 [2024-10-01 13:43:58.844579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.129 [2024-10-01 13:43:58.854014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.129 [2024-10-01 13:43:58.854064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.129 [2024-10-01 13:43:58.854168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.129 [2024-10-01 13:43:58.854200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.129 [2024-10-01 13:43:58.854219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.129 [2024-10-01 13:43:58.854291] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.129 [2024-10-01 13:43:58.854318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.129 [2024-10-01 13:43:58.854335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.129 [2024-10-01 13:43:58.854381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.129 [2024-10-01 13:43:58.854406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.129 [2024-10-01 13:43:58.854433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.129 [2024-10-01 13:43:58.854451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.129 [2024-10-01 13:43:58.854466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.129 [2024-10-01 13:43:58.854483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.129 [2024-10-01 13:43:58.854498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.129 [2024-10-01 13:43:58.854512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.129 [2024-10-01 13:43:58.854561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.129 [2024-10-01 13:43:58.854584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.129 [2024-10-01 13:43:58.865232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.129 [2024-10-01 13:43:58.865284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.129 [2024-10-01 13:43:58.865390] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.129 [2024-10-01 13:43:58.865423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.129 [2024-10-01 13:43:58.865441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.129 [2024-10-01 13:43:58.865492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.129 [2024-10-01 13:43:58.865517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.129 [2024-10-01 13:43:58.865549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.129 [2024-10-01 13:43:58.865589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.129 [2024-10-01 13:43:58.865613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.129 [2024-10-01 13:43:58.865658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.129 [2024-10-01 13:43:58.865681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.129 [2024-10-01 13:43:58.865695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.129 [2024-10-01 13:43:58.865713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.129 [2024-10-01 13:43:58.865729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.129 [2024-10-01 13:43:58.865743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.129 [2024-10-01 13:43:58.865775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.129 [2024-10-01 13:43:58.865808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.129 [2024-10-01 13:43:58.875464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.129 [2024-10-01 13:43:58.875516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.129 [2024-10-01 13:43:58.875629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.129 [2024-10-01 13:43:58.875661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.129 [2024-10-01 13:43:58.875679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.129 [2024-10-01 13:43:58.875729] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.129 [2024-10-01 13:43:58.875754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.129 [2024-10-01 13:43:58.875770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.129 [2024-10-01 13:43:58.876709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.129 [2024-10-01 13:43:58.876754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.129 [2024-10-01 13:43:58.876942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.129 [2024-10-01 13:43:58.876970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.129 [2024-10-01 13:43:58.876986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.129 [2024-10-01 13:43:58.877004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.129 [2024-10-01 13:43:58.877019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.129 [2024-10-01 13:43:58.877032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.129 [2024-10-01 13:43:58.878301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.129 [2024-10-01 13:43:58.878340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.129 [2024-10-01 13:43:58.886318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.129 [2024-10-01 13:43:58.886368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.129 [2024-10-01 13:43:58.886467] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.129 [2024-10-01 13:43:58.886499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.129 [2024-10-01 13:43:58.886517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.129 [2024-10-01 13:43:58.886586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.129 [2024-10-01 13:43:58.886614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.129 [2024-10-01 13:43:58.886630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.130 [2024-10-01 13:43:58.886665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.130 [2024-10-01 13:43:58.886689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.130 [2024-10-01 13:43:58.886722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.130 [2024-10-01 13:43:58.886740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.130 [2024-10-01 13:43:58.886772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.130 [2024-10-01 13:43:58.886791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.130 [2024-10-01 13:43:58.886808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.130 [2024-10-01 13:43:58.886821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.130 [2024-10-01 13:43:58.887085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.130 [2024-10-01 13:43:58.887113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.130 [2024-10-01 13:43:58.896517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.130 [2024-10-01 13:43:58.896582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.130 [2024-10-01 13:43:58.896680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.130 [2024-10-01 13:43:58.896712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.130 [2024-10-01 13:43:58.896730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.130 [2024-10-01 13:43:58.896786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.130 [2024-10-01 13:43:58.896812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.130 [2024-10-01 13:43:58.896828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.130 [2024-10-01 13:43:58.896861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.130 [2024-10-01 13:43:58.896885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.130 [2024-10-01 13:43:58.896912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.130 [2024-10-01 13:43:58.896931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.130 [2024-10-01 13:43:58.896945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.130 [2024-10-01 13:43:58.896962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.130 [2024-10-01 13:43:58.896977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.130 [2024-10-01 13:43:58.896991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.130 [2024-10-01 13:43:58.897022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.130 [2024-10-01 13:43:58.897042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.130 [2024-10-01 13:43:58.907695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.130 [2024-10-01 13:43:58.907745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.130 [2024-10-01 13:43:58.907844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.130 [2024-10-01 13:43:58.907888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.130 [2024-10-01 13:43:58.907909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.130 [2024-10-01 13:43:58.907960] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.130 [2024-10-01 13:43:58.908002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.130 [2024-10-01 13:43:58.908022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.130 [2024-10-01 13:43:58.908057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.130 [2024-10-01 13:43:58.908081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.130 [2024-10-01 13:43:58.908108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.130 [2024-10-01 13:43:58.908126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.130 [2024-10-01 13:43:58.908140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.130 [2024-10-01 13:43:58.908157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.130 [2024-10-01 13:43:58.908177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.130 [2024-10-01 13:43:58.908190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.130 [2024-10-01 13:43:58.908222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.130 [2024-10-01 13:43:58.908242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.130 [2024-10-01 13:43:58.919681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.130 [2024-10-01 13:43:58.919783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.130 [2024-10-01 13:43:58.921955] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.130 [2024-10-01 13:43:58.922031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.130 [2024-10-01 13:43:58.922075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.130 [2024-10-01 13:43:58.922177] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.130 [2024-10-01 13:43:58.922223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.130 [2024-10-01 13:43:58.922258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.130 [2024-10-01 13:43:58.923473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.130 [2024-10-01 13:43:58.923570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.130 [2024-10-01 13:43:58.925417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.130 [2024-10-01 13:43:58.925467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.130 [2024-10-01 13:43:58.925489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.130 [2024-10-01 13:43:58.925515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.130 [2024-10-01 13:43:58.925531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.130 [2024-10-01 13:43:58.925572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.130 [2024-10-01 13:43:58.926435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.130 [2024-10-01 13:43:58.926477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.130 [2024-10-01 13:43:58.929892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.130 [2024-10-01 13:43:58.929973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.130 [2024-10-01 13:43:58.930064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.130 [2024-10-01 13:43:58.930095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.130 [2024-10-01 13:43:58.930113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.130 [2024-10-01 13:43:58.930182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.130 [2024-10-01 13:43:58.930210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.130 [2024-10-01 13:43:58.930227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.130 [2024-10-01 13:43:58.930246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.130 [2024-10-01 13:43:58.930279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.130 [2024-10-01 13:43:58.930301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.130 [2024-10-01 13:43:58.930315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.130 [2024-10-01 13:43:58.930330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.130 [2024-10-01 13:43:58.930362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.130 [2024-10-01 13:43:58.930382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.130 [2024-10-01 13:43:58.930396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.130 [2024-10-01 13:43:58.930410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.131 [2024-10-01 13:43:58.930440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.131 [2024-10-01 13:43:58.940010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.131 [2024-10-01 13:43:58.940146] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.131 [2024-10-01 13:43:58.940181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.131 [2024-10-01 13:43:58.940199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.131 [2024-10-01 13:43:58.940235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.131 [2024-10-01 13:43:58.940270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.131 [2024-10-01 13:43:58.940342] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.131 [2024-10-01 13:43:58.940371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.131 [2024-10-01 13:43:58.940388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.131 [2024-10-01 13:43:58.940404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.131 [2024-10-01 13:43:58.940418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.131 [2024-10-01 13:43:58.940433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.131 [2024-10-01 13:43:58.941252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.131 [2024-10-01 13:43:58.941313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.131 [2024-10-01 13:43:58.941512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.131 [2024-10-01 13:43:58.941560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.131 [2024-10-01 13:43:58.941578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.131 [2024-10-01 13:43:58.942580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.131 [2024-10-01 13:43:58.950174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.131 [2024-10-01 13:43:58.950294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.131 [2024-10-01 13:43:58.950327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.131 [2024-10-01 13:43:58.950346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.131 [2024-10-01 13:43:58.950379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.131 [2024-10-01 13:43:58.950427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.131 [2024-10-01 13:43:58.950449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.131 [2024-10-01 13:43:58.950463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.131 [2024-10-01 13:43:58.950497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.131 [2024-10-01 13:43:58.950522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.131 [2024-10-01 13:43:58.950620] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.131 [2024-10-01 13:43:58.950650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.131 [2024-10-01 13:43:58.950668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.131 [2024-10-01 13:43:58.950701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.131 [2024-10-01 13:43:58.950733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.131 [2024-10-01 13:43:58.950751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.131 [2024-10-01 13:43:58.950765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.131 [2024-10-01 13:43:58.950797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.131 [2024-10-01 13:43:58.960383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.131 [2024-10-01 13:43:58.960504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.131 [2024-10-01 13:43:58.960551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.131 [2024-10-01 13:43:58.960573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.131 [2024-10-01 13:43:58.960608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.131 [2024-10-01 13:43:58.961531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.131 [2024-10-01 13:43:58.961583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.131 [2024-10-01 13:43:58.961621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.131 [2024-10-01 13:43:58.961841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.131 [2024-10-01 13:43:58.961875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.131 [2024-10-01 13:43:58.962002] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.131 [2024-10-01 13:43:58.962034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.131 [2024-10-01 13:43:58.962053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.131 [2024-10-01 13:43:58.963285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.131 [2024-10-01 13:43:58.964194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.131 [2024-10-01 13:43:58.964235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.131 [2024-10-01 13:43:58.964253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.131 [2024-10-01 13:43:58.964479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.131 [2024-10-01 13:43:58.971234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.131 [2024-10-01 13:43:58.971354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.131 [2024-10-01 13:43:58.971386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.131 [2024-10-01 13:43:58.971404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.131 [2024-10-01 13:43:58.971437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.131 [2024-10-01 13:43:58.971469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.131 [2024-10-01 13:43:58.971486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.131 [2024-10-01 13:43:58.971501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.131 [2024-10-01 13:43:58.971532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.131 [2024-10-01 13:43:58.972192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.131 [2024-10-01 13:43:58.972311] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.131 [2024-10-01 13:43:58.972345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.131 [2024-10-01 13:43:58.972364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.131 [2024-10-01 13:43:58.972397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.131 [2024-10-01 13:43:58.972434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.131 [2024-10-01 13:43:58.972451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.131 [2024-10-01 13:43:58.972466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.131 [2024-10-01 13:43:58.972498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.131 [2024-10-01 13:43:58.981378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.131 [2024-10-01 13:43:58.981495] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.131 [2024-10-01 13:43:58.981558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.131 [2024-10-01 13:43:58.981582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.131 [2024-10-01 13:43:58.981617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.131 [2024-10-01 13:43:58.981650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.131 [2024-10-01 13:43:58.981667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.131 [2024-10-01 13:43:58.981681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.131 [2024-10-01 13:43:58.981713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.131 [2024-10-01 13:43:58.982282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.131 [2024-10-01 13:43:58.982385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.131 [2024-10-01 13:43:58.982417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.131 [2024-10-01 13:43:58.982434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.131 [2024-10-01 13:43:58.982467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.131 [2024-10-01 13:43:58.982500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.131 [2024-10-01 13:43:58.982518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.131 [2024-10-01 13:43:58.982532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.131 [2024-10-01 13:43:58.982581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.131 [2024-10-01 13:43:58.992611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.131 [2024-10-01 13:43:58.992689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.132 [2024-10-01 13:43:58.992773] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.132 [2024-10-01 13:43:58.992803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.132 [2024-10-01 13:43:58.992821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.132 [2024-10-01 13:43:58.992890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.132 [2024-10-01 13:43:58.992918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.132 [2024-10-01 13:43:58.992935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.132 [2024-10-01 13:43:58.992954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.132 [2024-10-01 13:43:58.992987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.132 [2024-10-01 13:43:58.993008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.132 [2024-10-01 13:43:58.993022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.132 [2024-10-01 13:43:58.993037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.132 [2024-10-01 13:43:58.993069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.132 [2024-10-01 13:43:58.993103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.132 [2024-10-01 13:43:58.993120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.132 [2024-10-01 13:43:58.993135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.132 [2024-10-01 13:43:58.993166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.132 [2024-10-01 13:43:59.003032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.132 [2024-10-01 13:43:59.003090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.132 [2024-10-01 13:43:59.003193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.132 [2024-10-01 13:43:59.003225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.132 [2024-10-01 13:43:59.003243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.132 [2024-10-01 13:43:59.003294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.132 [2024-10-01 13:43:59.003319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.132 [2024-10-01 13:43:59.003336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.132 [2024-10-01 13:43:59.003369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.132 [2024-10-01 13:43:59.003393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.132 [2024-10-01 13:43:59.003420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.132 [2024-10-01 13:43:59.003439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.132 [2024-10-01 13:43:59.003454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.132 [2024-10-01 13:43:59.003471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.132 [2024-10-01 13:43:59.003487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.132 [2024-10-01 13:43:59.003501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.132 [2024-10-01 13:43:59.004442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.132 [2024-10-01 13:43:59.004483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.132 [2024-10-01 13:43:59.013251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.132 [2024-10-01 13:43:59.013306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.132 [2024-10-01 13:43:59.013426] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.132 [2024-10-01 13:43:59.013459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.132 [2024-10-01 13:43:59.013478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.132 [2024-10-01 13:43:59.013529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.132 [2024-10-01 13:43:59.013572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.132 [2024-10-01 13:43:59.013590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.132 [2024-10-01 13:43:59.014531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.132 [2024-10-01 13:43:59.014591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.132 [2024-10-01 13:43:59.015198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.132 [2024-10-01 13:43:59.015237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.132 [2024-10-01 13:43:59.015256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.132 [2024-10-01 13:43:59.015274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.132 [2024-10-01 13:43:59.015290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.132 [2024-10-01 13:43:59.015304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.132 [2024-10-01 13:43:59.015393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.132 [2024-10-01 13:43:59.015419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.132 [2024-10-01 13:43:59.023390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.132 [2024-10-01 13:43:59.023465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.132 [2024-10-01 13:43:59.023566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.132 [2024-10-01 13:43:59.023599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.132 [2024-10-01 13:43:59.023617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.132 [2024-10-01 13:43:59.024883] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.132 [2024-10-01 13:43:59.024928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.132 [2024-10-01 13:43:59.024949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.132 [2024-10-01 13:43:59.024969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.132 [2024-10-01 13:43:59.025825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.132 [2024-10-01 13:43:59.025867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.132 [2024-10-01 13:43:59.025886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.132 [2024-10-01 13:43:59.025900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.132 [2024-10-01 13:43:59.026014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.132 [2024-10-01 13:43:59.026041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.132 [2024-10-01 13:43:59.026057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.132 [2024-10-01 13:43:59.026072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.132 [2024-10-01 13:43:59.026105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.132 [2024-10-01 13:43:59.033489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.132 [2024-10-01 13:43:59.034574] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.132 [2024-10-01 13:43:59.034622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.132 [2024-10-01 13:43:59.034668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.132 [2024-10-01 13:43:59.034893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.132 [2024-10-01 13:43:59.036238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.132 [2024-10-01 13:43:59.036293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.132 [2024-10-01 13:43:59.036315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.132 [2024-10-01 13:43:59.036330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.132 [2024-10-01 13:43:59.037206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.132 [2024-10-01 13:43:59.037293] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.132 [2024-10-01 13:43:59.037323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.132 [2024-10-01 13:43:59.037341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.132 [2024-10-01 13:43:59.037576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.132 [2024-10-01 13:43:59.037626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.132 [2024-10-01 13:43:59.037646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.132 [2024-10-01 13:43:59.037660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.132 [2024-10-01 13:43:59.037693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.132 [2024-10-01 13:43:59.044438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.132 [2024-10-01 13:43:59.044573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.132 [2024-10-01 13:43:59.044607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.132 [2024-10-01 13:43:59.044625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.132 [2024-10-01 13:43:59.044661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.133 [2024-10-01 13:43:59.044694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.133 [2024-10-01 13:43:59.044712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.133 [2024-10-01 13:43:59.044726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.133 [2024-10-01 13:43:59.044758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.133 [2024-10-01 13:43:59.046333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.133 [2024-10-01 13:43:59.046445] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.133 [2024-10-01 13:43:59.046477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.133 [2024-10-01 13:43:59.046495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.133 [2024-10-01 13:43:59.046528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.133 [2024-10-01 13:43:59.046578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.133 [2024-10-01 13:43:59.046613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.133 [2024-10-01 13:43:59.046628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.133 [2024-10-01 13:43:59.046661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.133 [2024-10-01 13:43:59.055193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.133 [2024-10-01 13:43:59.055314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.133 [2024-10-01 13:43:59.055348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.133 [2024-10-01 13:43:59.055367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.133 [2024-10-01 13:43:59.055400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.133 [2024-10-01 13:43:59.055432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.133 [2024-10-01 13:43:59.055450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.133 [2024-10-01 13:43:59.055464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.133 [2024-10-01 13:43:59.055496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.133 [2024-10-01 13:43:59.056420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.133 [2024-10-01 13:43:59.056515] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.133 [2024-10-01 13:43:59.056559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.133 [2024-10-01 13:43:59.056580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.133 [2024-10-01 13:43:59.056614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.133 [2024-10-01 13:43:59.057413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.133 [2024-10-01 13:43:59.057453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.133 [2024-10-01 13:43:59.057471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.133 [2024-10-01 13:43:59.057676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.133 [2024-10-01 13:43:59.066462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.133 [2024-10-01 13:43:59.066618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.133 [2024-10-01 13:43:59.066663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.133 [2024-10-01 13:43:59.066685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.133 [2024-10-01 13:43:59.066722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.133 [2024-10-01 13:43:59.066758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.133 [2024-10-01 13:43:59.066832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.133 [2024-10-01 13:43:59.066861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.133 [2024-10-01 13:43:59.066878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.133 [2024-10-01 13:43:59.066913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.133 [2024-10-01 13:43:59.066930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.133 [2024-10-01 13:43:59.066944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.133 [2024-10-01 13:43:59.066979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.133 [2024-10-01 13:43:59.067002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.133 [2024-10-01 13:43:59.067032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.133 [2024-10-01 13:43:59.067050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.133 [2024-10-01 13:43:59.067064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.133 [2024-10-01 13:43:59.067094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.133 [2024-10-01 13:43:59.076701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.133 [2024-10-01 13:43:59.076819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.133 [2024-10-01 13:43:59.076858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.133 [2024-10-01 13:43:59.076877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.133 [2024-10-01 13:43:59.076941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.133 [2024-10-01 13:43:59.077883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.133 [2024-10-01 13:43:59.077936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.133 [2024-10-01 13:43:59.077957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.133 [2024-10-01 13:43:59.077971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.133 [2024-10-01 13:43:59.078172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.133 [2024-10-01 13:43:59.078254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.133 [2024-10-01 13:43:59.078284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.133 [2024-10-01 13:43:59.078302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.133 [2024-10-01 13:43:59.079592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.133 [2024-10-01 13:43:59.080520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.133 [2024-10-01 13:43:59.080576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.133 [2024-10-01 13:43:59.080596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.133 [2024-10-01 13:43:59.080771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.133 [2024-10-01 13:43:59.087635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.133 [2024-10-01 13:43:59.087752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.133 [2024-10-01 13:43:59.087786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.133 [2024-10-01 13:43:59.087805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.133 [2024-10-01 13:43:59.087856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.133 [2024-10-01 13:43:59.087903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.133 [2024-10-01 13:43:59.087923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.133 [2024-10-01 13:43:59.087937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.133 [2024-10-01 13:43:59.087969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.133 [2024-10-01 13:43:59.088027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.133 [2024-10-01 13:43:59.088118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.133 [2024-10-01 13:43:59.088147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.133 [2024-10-01 13:43:59.088165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.133 [2024-10-01 13:43:59.088432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.133 [2024-10-01 13:43:59.088614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.133 [2024-10-01 13:43:59.088650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.133 [2024-10-01 13:43:59.088667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.133 [2024-10-01 13:43:59.088779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.133 [2024-10-01 13:43:59.097823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.133 [2024-10-01 13:43:59.097959] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.133 [2024-10-01 13:43:59.098008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.133 [2024-10-01 13:43:59.098029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.133 [2024-10-01 13:43:59.098063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.133 [2024-10-01 13:43:59.098099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.133 [2024-10-01 13:43:59.098119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.133 [2024-10-01 13:43:59.098134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.134 [2024-10-01 13:43:59.098176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.134 [2024-10-01 13:43:59.098214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.134 [2024-10-01 13:43:59.098298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.134 [2024-10-01 13:43:59.098327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.134 [2024-10-01 13:43:59.098344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.134 [2024-10-01 13:43:59.098376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.134 [2024-10-01 13:43:59.098408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.134 [2024-10-01 13:43:59.098426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.134 [2024-10-01 13:43:59.098456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.134 [2024-10-01 13:43:59.098490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.134 [2024-10-01 13:43:59.109331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.134 [2024-10-01 13:43:59.109384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.134 [2024-10-01 13:43:59.109494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.134 [2024-10-01 13:43:59.109526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.134 [2024-10-01 13:43:59.109561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.134 [2024-10-01 13:43:59.109615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.134 [2024-10-01 13:43:59.109641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.134 [2024-10-01 13:43:59.109658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.134 [2024-10-01 13:43:59.109692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.134 [2024-10-01 13:43:59.109716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.134 [2024-10-01 13:43:59.109743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.134 [2024-10-01 13:43:59.109761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.134 [2024-10-01 13:43:59.109776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.134 [2024-10-01 13:43:59.109793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.134 [2024-10-01 13:43:59.109809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.134 [2024-10-01 13:43:59.109823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.134 [2024-10-01 13:43:59.109854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.134 [2024-10-01 13:43:59.109874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.134 [2024-10-01 13:43:59.119704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.134 [2024-10-01 13:43:59.119755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.134 [2024-10-01 13:43:59.119854] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.134 [2024-10-01 13:43:59.119897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.134 [2024-10-01 13:43:59.119917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.134 [2024-10-01 13:43:59.119968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.134 [2024-10-01 13:43:59.119993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.134 [2024-10-01 13:43:59.120010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.134 [2024-10-01 13:43:59.120957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.134 [2024-10-01 13:43:59.121004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.134 [2024-10-01 13:43:59.121229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.134 [2024-10-01 13:43:59.121267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.134 [2024-10-01 13:43:59.121285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.134 [2024-10-01 13:43:59.121303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.134 [2024-10-01 13:43:59.121319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.134 [2024-10-01 13:43:59.121333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.134 [2024-10-01 13:43:59.121411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.134 [2024-10-01 13:43:59.121434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.134 [2024-10-01 13:43:59.130743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.134 [2024-10-01 13:43:59.130796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.134 [2024-10-01 13:43:59.130899] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.134 [2024-10-01 13:43:59.130931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.134 [2024-10-01 13:43:59.130950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.134 [2024-10-01 13:43:59.131000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.134 [2024-10-01 13:43:59.131026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.134 [2024-10-01 13:43:59.131043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.134 [2024-10-01 13:43:59.131076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.134 [2024-10-01 13:43:59.131099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.134 [2024-10-01 13:43:59.131126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.134 [2024-10-01 13:43:59.131144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.134 [2024-10-01 13:43:59.131159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.134 [2024-10-01 13:43:59.131175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.134 [2024-10-01 13:43:59.131191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.134 [2024-10-01 13:43:59.131205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.134 [2024-10-01 13:43:59.131466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.134 [2024-10-01 13:43:59.131493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.134 [2024-10-01 13:43:59.140895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.134 [2024-10-01 13:43:59.140971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.134 [2024-10-01 13:43:59.141054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.134 [2024-10-01 13:43:59.141084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.134 [2024-10-01 13:43:59.141102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.134 [2024-10-01 13:43:59.141190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.134 [2024-10-01 13:43:59.141219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.134 [2024-10-01 13:43:59.141236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.134 [2024-10-01 13:43:59.141256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.134 [2024-10-01 13:43:59.141289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.134 [2024-10-01 13:43:59.141310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.134 [2024-10-01 13:43:59.141325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.134 [2024-10-01 13:43:59.141340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.134 [2024-10-01 13:43:59.141372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.134 [2024-10-01 13:43:59.141393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.134 [2024-10-01 13:43:59.141407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.134 [2024-10-01 13:43:59.141421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.134 [2024-10-01 13:43:59.141451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.135 [2024-10-01 13:43:59.152298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.135 [2024-10-01 13:43:59.152380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.135 [2024-10-01 13:43:59.152516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.135 [2024-10-01 13:43:59.152567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.135 [2024-10-01 13:43:59.152588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.135 [2024-10-01 13:43:59.152643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.135 [2024-10-01 13:43:59.152669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.135 [2024-10-01 13:43:59.152685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.135 [2024-10-01 13:43:59.152722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.135 [2024-10-01 13:43:59.152747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.135 [2024-10-01 13:43:59.152774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.135 [2024-10-01 13:43:59.152793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.135 [2024-10-01 13:43:59.152809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.135 [2024-10-01 13:43:59.152826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.135 [2024-10-01 13:43:59.152841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.135 [2024-10-01 13:43:59.152855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.135 [2024-10-01 13:43:59.152887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.135 [2024-10-01 13:43:59.152927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.135 [2024-10-01 13:43:59.162651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.135 [2024-10-01 13:43:59.162705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.135 [2024-10-01 13:43:59.162807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.135 [2024-10-01 13:43:59.162840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.135 [2024-10-01 13:43:59.162859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.135 [2024-10-01 13:43:59.162911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.135 [2024-10-01 13:43:59.162936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.135 [2024-10-01 13:43:59.162954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.135 [2024-10-01 13:43:59.163896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.135 [2024-10-01 13:43:59.163942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.135 [2024-10-01 13:43:59.164158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.135 [2024-10-01 13:43:59.164195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.135 [2024-10-01 13:43:59.164213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.135 [2024-10-01 13:43:59.164231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.135 [2024-10-01 13:43:59.164247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.135 [2024-10-01 13:43:59.164261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.135 [2024-10-01 13:43:59.165547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.135 [2024-10-01 13:43:59.165585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.135 [2024-10-01 13:43:59.173664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.135 [2024-10-01 13:43:59.173716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.135 [2024-10-01 13:43:59.173816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.135 [2024-10-01 13:43:59.173849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.135 [2024-10-01 13:43:59.173867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.135 [2024-10-01 13:43:59.173918] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.135 [2024-10-01 13:43:59.173943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.135 [2024-10-01 13:43:59.173960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.135 [2024-10-01 13:43:59.173993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.135 [2024-10-01 13:43:59.174016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.135 [2024-10-01 13:43:59.174043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.135 [2024-10-01 13:43:59.174083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.135 [2024-10-01 13:43:59.174100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.135 [2024-10-01 13:43:59.174117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.135 [2024-10-01 13:43:59.174133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.135 [2024-10-01 13:43:59.174147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.135 [2024-10-01 13:43:59.174411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.135 [2024-10-01 13:43:59.174438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.135 [2024-10-01 13:43:59.183824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.135 [2024-10-01 13:43:59.183883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.135 [2024-10-01 13:43:59.183984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.135 [2024-10-01 13:43:59.184016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.135 [2024-10-01 13:43:59.184035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.135 [2024-10-01 13:43:59.184086] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.135 [2024-10-01 13:43:59.184111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.135 [2024-10-01 13:43:59.184127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.135 [2024-10-01 13:43:59.184161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.135 [2024-10-01 13:43:59.184185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.135 [2024-10-01 13:43:59.184212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.135 [2024-10-01 13:43:59.184230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.135 [2024-10-01 13:43:59.184245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.135 [2024-10-01 13:43:59.184261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.135 [2024-10-01 13:43:59.184277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.135 [2024-10-01 13:43:59.184291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.135 [2024-10-01 13:43:59.184322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.135 [2024-10-01 13:43:59.184342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.135 [2024-10-01 13:43:59.194993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.135 [2024-10-01 13:43:59.195048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.135 [2024-10-01 13:43:59.195150] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.135 [2024-10-01 13:43:59.195183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.135 [2024-10-01 13:43:59.195202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.135 [2024-10-01 13:43:59.195253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.135 [2024-10-01 13:43:59.195294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.135 [2024-10-01 13:43:59.195314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.135 [2024-10-01 13:43:59.195348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.135 [2024-10-01 13:43:59.195373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.135 [2024-10-01 13:43:59.195399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.135 [2024-10-01 13:43:59.195417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.135 [2024-10-01 13:43:59.195432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.135 [2024-10-01 13:43:59.195449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.135 [2024-10-01 13:43:59.195464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.135 [2024-10-01 13:43:59.195478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.135 [2024-10-01 13:43:59.195510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.135 [2024-10-01 13:43:59.195529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.135 [2024-10-01 13:43:59.206615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.135 [2024-10-01 13:43:59.206685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.136 [2024-10-01 13:43:59.208363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.136 [2024-10-01 13:43:59.208417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.136 [2024-10-01 13:43:59.208451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.136 [2024-10-01 13:43:59.208531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.136 [2024-10-01 13:43:59.208579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.136 [2024-10-01 13:43:59.208597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.136 [2024-10-01 13:43:59.209499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.136 [2024-10-01 13:43:59.209568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.136 [2024-10-01 13:43:59.209799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.136 [2024-10-01 13:43:59.209840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.136 [2024-10-01 13:43:59.209861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.136 [2024-10-01 13:43:59.209880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.136 [2024-10-01 13:43:59.209896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.136 [2024-10-01 13:43:59.209910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.136 [2024-10-01 13:43:59.211245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.136 [2024-10-01 13:43:59.211295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.136 [2024-10-01 13:43:59.216841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.136 [2024-10-01 13:43:59.216995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.136 [2024-10-01 13:43:59.217167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.136 [2024-10-01 13:43:59.217220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.136 [2024-10-01 13:43:59.217256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.136 [2024-10-01 13:43:59.217388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.136 [2024-10-01 13:43:59.217434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.136 [2024-10-01 13:43:59.217468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.136 [2024-10-01 13:43:59.217507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.136 [2024-10-01 13:43:59.217905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.136 [2024-10-01 13:43:59.217970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.136 [2024-10-01 13:43:59.218001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.136 [2024-10-01 13:43:59.218027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.136 [2024-10-01 13:43:59.218236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.136 [2024-10-01 13:43:59.218293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.136 [2024-10-01 13:43:59.218322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.136 [2024-10-01 13:43:59.218350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.136 [2024-10-01 13:43:59.218521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.136 [2024-10-01 13:43:59.227019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.136 [2024-10-01 13:43:59.227224] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.136 [2024-10-01 13:43:59.227262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.136 [2024-10-01 13:43:59.227282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.136 [2024-10-01 13:43:59.227333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.136 [2024-10-01 13:43:59.227376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.136 [2024-10-01 13:43:59.227411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.136 [2024-10-01 13:43:59.227429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.136 [2024-10-01 13:43:59.227445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.136 [2024-10-01 13:43:59.228710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.136 [2024-10-01 13:43:59.228806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.136 [2024-10-01 13:43:59.228838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.136 [2024-10-01 13:43:59.228857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.136 [2024-10-01 13:43:59.229796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.136 [2024-10-01 13:43:59.229937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.136 [2024-10-01 13:43:59.229965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.136 [2024-10-01 13:43:59.229981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.136 [2024-10-01 13:43:59.230017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.136 [2024-10-01 13:43:59.237171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.136 [2024-10-01 13:43:59.237304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.136 [2024-10-01 13:43:59.237337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.136 [2024-10-01 13:43:59.237356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.136 [2024-10-01 13:43:59.237391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.136 [2024-10-01 13:43:59.237423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.136 [2024-10-01 13:43:59.237440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.136 [2024-10-01 13:43:59.237454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.136 [2024-10-01 13:43:59.237500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.136 [2024-10-01 13:43:59.237554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.136 [2024-10-01 13:43:59.237647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.136 [2024-10-01 13:43:59.237678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.136 [2024-10-01 13:43:59.237696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.136 [2024-10-01 13:43:59.237729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.136 [2024-10-01 13:43:59.237763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.136 [2024-10-01 13:43:59.237781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.136 [2024-10-01 13:43:59.237795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.136 [2024-10-01 13:43:59.239058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.136 [2024-10-01 13:43:59.248067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.136 [2024-10-01 13:43:59.248122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.136 [2024-10-01 13:43:59.248228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.136 [2024-10-01 13:43:59.248262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.136 [2024-10-01 13:43:59.248281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.136 [2024-10-01 13:43:59.248337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.136 [2024-10-01 13:43:59.248363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.136 [2024-10-01 13:43:59.248398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.136 [2024-10-01 13:43:59.248435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.136 [2024-10-01 13:43:59.248459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.136 [2024-10-01 13:43:59.248486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.136 [2024-10-01 13:43:59.248505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.136 [2024-10-01 13:43:59.248519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.136 [2024-10-01 13:43:59.248554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.136 [2024-10-01 13:43:59.248574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.136 [2024-10-01 13:43:59.248589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.136 [2024-10-01 13:43:59.248622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.136 [2024-10-01 13:43:59.248642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.136 [2024-10-01 13:43:59.258211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.136 [2024-10-01 13:43:59.258267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.136 [2024-10-01 13:43:59.258375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.136 [2024-10-01 13:43:59.258408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.136 [2024-10-01 13:43:59.258427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.136 [2024-10-01 13:43:59.258478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.137 [2024-10-01 13:43:59.258503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.137 [2024-10-01 13:43:59.258520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.137 [2024-10-01 13:43:59.259463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.137 [2024-10-01 13:43:59.259513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.137 [2024-10-01 13:43:59.259749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.137 [2024-10-01 13:43:59.259780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.137 [2024-10-01 13:43:59.259797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.137 [2024-10-01 13:43:59.259816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.137 [2024-10-01 13:43:59.259831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.137 [2024-10-01 13:43:59.259845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.137 [2024-10-01 13:43:59.259954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.137 [2024-10-01 13:43:59.259981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.137 [2024-10-01 13:43:59.268348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.137 [2024-10-01 13:43:59.268445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.137 [2024-10-01 13:43:59.268551] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.137 [2024-10-01 13:43:59.268584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.137 [2024-10-01 13:43:59.268602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.137 [2024-10-01 13:43:59.268673] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.137 [2024-10-01 13:43:59.268701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.137 [2024-10-01 13:43:59.268718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.137 [2024-10-01 13:43:59.268737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.137 [2024-10-01 13:43:59.268770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.137 [2024-10-01 13:43:59.268791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.137 [2024-10-01 13:43:59.268806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.137 [2024-10-01 13:43:59.268821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.137 [2024-10-01 13:43:59.268853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.137 [2024-10-01 13:43:59.268873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.137 [2024-10-01 13:43:59.268888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.137 [2024-10-01 13:43:59.268902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.137 [2024-10-01 13:43:59.268932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.137 [2024-10-01 13:43:59.278464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.137 [2024-10-01 13:43:59.278597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.137 [2024-10-01 13:43:59.278631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.137 [2024-10-01 13:43:59.278650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.137 [2024-10-01 13:43:59.278698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.137 [2024-10-01 13:43:59.278741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.137 [2024-10-01 13:43:59.278773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.137 [2024-10-01 13:43:59.278792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.137 [2024-10-01 13:43:59.278806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.137 [2024-10-01 13:43:59.278836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.137 [2024-10-01 13:43:59.278897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.137 [2024-10-01 13:43:59.278925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.137 [2024-10-01 13:43:59.278943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.137 [2024-10-01 13:43:59.278976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.137 [2024-10-01 13:43:59.279027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.137 [2024-10-01 13:43:59.279047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.137 [2024-10-01 13:43:59.279062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.137 [2024-10-01 13:43:59.280307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.137 [2024-10-01 13:43:59.288902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.137 [2024-10-01 13:43:59.289005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.137 [2024-10-01 13:43:59.289093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.137 [2024-10-01 13:43:59.289124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.137 [2024-10-01 13:43:59.289142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.137 [2024-10-01 13:43:59.290114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.137 [2024-10-01 13:43:59.290158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.137 [2024-10-01 13:43:59.290179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.137 [2024-10-01 13:43:59.290199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.137 [2024-10-01 13:43:59.290822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.137 [2024-10-01 13:43:59.290865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.137 [2024-10-01 13:43:59.290883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.137 [2024-10-01 13:43:59.290898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.137 [2024-10-01 13:43:59.291000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.137 [2024-10-01 13:43:59.291026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.137 [2024-10-01 13:43:59.291042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.137 [2024-10-01 13:43:59.291059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.137 [2024-10-01 13:43:59.291091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.137 [2024-10-01 13:43:59.299184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.137 [2024-10-01 13:43:59.299259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.137 [2024-10-01 13:43:59.299342] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.137 [2024-10-01 13:43:59.299372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.137 [2024-10-01 13:43:59.299390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.137 [2024-10-01 13:43:59.299457] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.137 [2024-10-01 13:43:59.299485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.137 [2024-10-01 13:43:59.299502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.137 [2024-10-01 13:43:59.299552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.137 [2024-10-01 13:43:59.300324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.137 [2024-10-01 13:43:59.300366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.137 [2024-10-01 13:43:59.300385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.137 [2024-10-01 13:43:59.300400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.137 [2024-10-01 13:43:59.300592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.137 [2024-10-01 13:43:59.300620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.137 [2024-10-01 13:43:59.300635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.137 [2024-10-01 13:43:59.300650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.137 [2024-10-01 13:43:59.300691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.137 [2024-10-01 13:43:59.309386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.137 [2024-10-01 13:43:59.309463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.137 [2024-10-01 13:43:59.309562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.137 [2024-10-01 13:43:59.309595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.137 [2024-10-01 13:43:59.309613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.137 [2024-10-01 13:43:59.309683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.137 [2024-10-01 13:43:59.309711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.138 [2024-10-01 13:43:59.309731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.138 [2024-10-01 13:43:59.309750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.138 [2024-10-01 13:43:59.309782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.138 [2024-10-01 13:43:59.309804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.138 [2024-10-01 13:43:59.309818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.138 [2024-10-01 13:43:59.309832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.138 [2024-10-01 13:43:59.309864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.138 [2024-10-01 13:43:59.309884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.138 [2024-10-01 13:43:59.309898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.138 [2024-10-01 13:43:59.309912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.138 [2024-10-01 13:43:59.309941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.138 [2024-10-01 13:43:59.319488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.138 [2024-10-01 13:43:59.319618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.138 [2024-10-01 13:43:59.319651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.138 [2024-10-01 13:43:59.319688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.138 [2024-10-01 13:43:59.319738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.138 [2024-10-01 13:43:59.319780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.138 [2024-10-01 13:43:59.319812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.138 [2024-10-01 13:43:59.319830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.138 [2024-10-01 13:43:59.319844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.138 [2024-10-01 13:43:59.319886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.138 [2024-10-01 13:43:59.319952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.138 [2024-10-01 13:43:59.319981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.138 [2024-10-01 13:43:59.319998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.138 [2024-10-01 13:43:59.320032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.138 [2024-10-01 13:43:59.320065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.138 [2024-10-01 13:43:59.320083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.138 [2024-10-01 13:43:59.320097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.138 [2024-10-01 13:43:59.320128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.138 [2024-10-01 13:43:59.329599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.138 [2024-10-01 13:43:59.329719] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.138 [2024-10-01 13:43:59.329753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.138 [2024-10-01 13:43:59.329772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.138 [2024-10-01 13:43:59.329805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.138 [2024-10-01 13:43:59.329838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.138 [2024-10-01 13:43:59.329855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.138 [2024-10-01 13:43:59.329869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.138 [2024-10-01 13:43:59.329911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.138 [2024-10-01 13:43:59.329953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.138 [2024-10-01 13:43:59.330038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.138 [2024-10-01 13:43:59.330068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.138 [2024-10-01 13:43:59.330086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.138 [2024-10-01 13:43:59.330118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.138 [2024-10-01 13:43:59.330150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.138 [2024-10-01 13:43:59.330183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.138 [2024-10-01 13:43:59.330199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.138 [2024-10-01 13:43:59.330232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.138 [2024-10-01 13:43:59.339694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.138 [2024-10-01 13:43:59.339814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.138 [2024-10-01 13:43:59.339847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.138 [2024-10-01 13:43:59.339866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.138 [2024-10-01 13:43:59.340649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.138 [2024-10-01 13:43:59.340866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.138 [2024-10-01 13:43:59.340913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.138 [2024-10-01 13:43:59.340931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.138 [2024-10-01 13:43:59.340976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.138 [2024-10-01 13:43:59.341004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.138 [2024-10-01 13:43:59.341089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.138 [2024-10-01 13:43:59.341122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.138 [2024-10-01 13:43:59.341145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.138 [2024-10-01 13:43:59.341178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.138 [2024-10-01 13:43:59.341210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.138 [2024-10-01 13:43:59.341228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.138 [2024-10-01 13:43:59.341242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.138 [2024-10-01 13:43:59.341273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.138 [2024-10-01 13:43:59.351659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.138 [2024-10-01 13:43:59.352375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.138 [2024-10-01 13:43:59.352422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.138 [2024-10-01 13:43:59.352444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.138 [2024-10-01 13:43:59.352565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.138 [2024-10-01 13:43:59.352609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.138 [2024-10-01 13:43:59.352688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.138 [2024-10-01 13:43:59.352718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.138 [2024-10-01 13:43:59.352736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.138 [2024-10-01 13:43:59.352771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.138 [2024-10-01 13:43:59.352787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.138 [2024-10-01 13:43:59.352801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.138 [2024-10-01 13:43:59.353067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.138 [2024-10-01 13:43:59.353098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.138 [2024-10-01 13:43:59.353243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.138 [2024-10-01 13:43:59.353269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.138 [2024-10-01 13:43:59.353284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.138 [2024-10-01 13:43:59.353396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.138 [2024-10-01 13:43:59.362447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.138 [2024-10-01 13:43:59.362588] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.138 [2024-10-01 13:43:59.362623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.138 [2024-10-01 13:43:59.362641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.138 [2024-10-01 13:43:59.362676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.138 [2024-10-01 13:43:59.362723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.138 [2024-10-01 13:43:59.362746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.138 [2024-10-01 13:43:59.362761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.138 [2024-10-01 13:43:59.362794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.138 [2024-10-01 13:43:59.362819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.139 [2024-10-01 13:43:59.362895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.139 [2024-10-01 13:43:59.362925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.139 [2024-10-01 13:43:59.362943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.139 [2024-10-01 13:43:59.362975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.139 [2024-10-01 13:43:59.363007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.139 [2024-10-01 13:43:59.363025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.139 [2024-10-01 13:43:59.363039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.139 [2024-10-01 13:43:59.363070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.139 [2024-10-01 13:43:59.373648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.139 [2024-10-01 13:43:59.373736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.139 [2024-10-01 13:43:59.373821] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.139 [2024-10-01 13:43:59.373851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.139 [2024-10-01 13:43:59.373887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.139 [2024-10-01 13:43:59.373961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.139 [2024-10-01 13:43:59.373990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.139 [2024-10-01 13:43:59.374007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.139 [2024-10-01 13:43:59.374025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.139 [2024-10-01 13:43:59.374059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.139 [2024-10-01 13:43:59.374080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.139 [2024-10-01 13:43:59.374094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.139 [2024-10-01 13:43:59.374108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.139 [2024-10-01 13:43:59.374140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.139 [2024-10-01 13:43:59.374160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.139 [2024-10-01 13:43:59.374174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.139 [2024-10-01 13:43:59.374188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.139 [2024-10-01 13:43:59.374218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.139 [2024-10-01 13:43:59.383821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.139 [2024-10-01 13:43:59.383906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.139 [2024-10-01 13:43:59.383989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.139 [2024-10-01 13:43:59.384020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.139 [2024-10-01 13:43:59.384039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.139 [2024-10-01 13:43:59.385008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.139 [2024-10-01 13:43:59.385053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.139 [2024-10-01 13:43:59.385074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.139 [2024-10-01 13:43:59.385093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.139 [2024-10-01 13:43:59.385301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.139 [2024-10-01 13:43:59.385331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.139 [2024-10-01 13:43:59.385347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.139 [2024-10-01 13:43:59.385361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.139 [2024-10-01 13:43:59.385439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.139 [2024-10-01 13:43:59.385461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.139 [2024-10-01 13:43:59.385476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.139 [2024-10-01 13:43:59.385508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.139 [2024-10-01 13:43:59.386751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.139 [2024-10-01 13:43:59.394697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.139 [2024-10-01 13:43:59.394748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.139 [2024-10-01 13:43:59.394847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.139 [2024-10-01 13:43:59.394879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.139 [2024-10-01 13:43:59.394896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.139 [2024-10-01 13:43:59.394946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.139 [2024-10-01 13:43:59.394972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.139 [2024-10-01 13:43:59.394989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.139 [2024-10-01 13:43:59.395022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.139 [2024-10-01 13:43:59.395046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.139 [2024-10-01 13:43:59.395072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.139 [2024-10-01 13:43:59.395090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.139 [2024-10-01 13:43:59.395104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.139 [2024-10-01 13:43:59.395122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.139 [2024-10-01 13:43:59.395137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.139 [2024-10-01 13:43:59.395150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.139 [2024-10-01 13:43:59.395412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.139 [2024-10-01 13:43:59.395439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.139 [2024-10-01 13:43:59.404875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.139 [2024-10-01 13:43:59.404925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.139 [2024-10-01 13:43:59.405023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.139 [2024-10-01 13:43:59.405055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.139 [2024-10-01 13:43:59.405073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.139 [2024-10-01 13:43:59.405123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.139 [2024-10-01 13:43:59.405148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.139 [2024-10-01 13:43:59.405164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.139 [2024-10-01 13:43:59.405198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.139 [2024-10-01 13:43:59.405221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.139 [2024-10-01 13:43:59.405270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.139 [2024-10-01 13:43:59.405290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.139 [2024-10-01 13:43:59.405304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.139 [2024-10-01 13:43:59.405321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.139 [2024-10-01 13:43:59.405337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.139 [2024-10-01 13:43:59.405350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.139 [2024-10-01 13:43:59.405382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.139 [2024-10-01 13:43:59.405402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.139 [2024-10-01 13:43:59.416099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.140 [2024-10-01 13:43:59.416151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.140 [2024-10-01 13:43:59.416250] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.140 [2024-10-01 13:43:59.416283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.140 [2024-10-01 13:43:59.416320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.140 [2024-10-01 13:43:59.416386] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.140 [2024-10-01 13:43:59.416413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.140 [2024-10-01 13:43:59.416430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.140 [2024-10-01 13:43:59.416475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.140 [2024-10-01 13:43:59.416500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.140 [2024-10-01 13:43:59.416529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.140 [2024-10-01 13:43:59.416564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.140 [2024-10-01 13:43:59.416579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.140 [2024-10-01 13:43:59.416597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.140 [2024-10-01 13:43:59.416619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.140 [2024-10-01 13:43:59.416654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.140 [2024-10-01 13:43:59.416697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.140 [2024-10-01 13:43:59.416720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.140 [2024-10-01 13:43:59.426233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.140 [2024-10-01 13:43:59.426318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.140 [2024-10-01 13:43:59.426404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.140 [2024-10-01 13:43:59.426434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.140 [2024-10-01 13:43:59.426453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.140 [2024-10-01 13:43:59.426576] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.140 [2024-10-01 13:43:59.426609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.140 [2024-10-01 13:43:59.426627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.140 [2024-10-01 13:43:59.426647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.140 [2024-10-01 13:43:59.426683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.140 [2024-10-01 13:43:59.426704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.140 [2024-10-01 13:43:59.426719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.140 [2024-10-01 13:43:59.426733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.140 [2024-10-01 13:43:59.426766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.140 [2024-10-01 13:43:59.426787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.140 [2024-10-01 13:43:59.426801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.140 [2024-10-01 13:43:59.426815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.140 [2024-10-01 13:43:59.428078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.140 [2024-10-01 13:43:59.436340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.140 [2024-10-01 13:43:59.436460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.140 [2024-10-01 13:43:59.436492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.140 [2024-10-01 13:43:59.436511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.140 [2024-10-01 13:43:59.437340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.140 [2024-10-01 13:43:59.437570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.140 [2024-10-01 13:43:59.437613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.140 [2024-10-01 13:43:59.437632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.140 [2024-10-01 13:43:59.437647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.140 [2024-10-01 13:43:59.438646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.140 [2024-10-01 13:43:59.438738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.140 [2024-10-01 13:43:59.438768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.140 [2024-10-01 13:43:59.438786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.140 [2024-10-01 13:43:59.439395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.140 [2024-10-01 13:43:59.439508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.140 [2024-10-01 13:43:59.439547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.140 [2024-10-01 13:43:59.439566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.140 [2024-10-01 13:43:59.439618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.140 [2024-10-01 13:43:59.446435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.140 [2024-10-01 13:43:59.446570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.140 [2024-10-01 13:43:59.446605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.140 [2024-10-01 13:43:59.446624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.140 [2024-10-01 13:43:59.446659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.140 [2024-10-01 13:43:59.446692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.140 [2024-10-01 13:43:59.446711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.140 [2024-10-01 13:43:59.446725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.140 [2024-10-01 13:43:59.446757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.140 [2024-10-01 13:43:59.449258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.140 [2024-10-01 13:43:59.449375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.140 [2024-10-01 13:43:59.449408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.140 [2024-10-01 13:43:59.449427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.140 [2024-10-01 13:43:59.449477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.140 [2024-10-01 13:43:59.449515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.140 [2024-10-01 13:43:59.449548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.140 [2024-10-01 13:43:59.449567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.140 [2024-10-01 13:43:59.449601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.140 [2024-10-01 13:43:59.456548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.140 [2024-10-01 13:43:59.456664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.140 [2024-10-01 13:43:59.456696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.140 [2024-10-01 13:43:59.456715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.140 [2024-10-01 13:43:59.457653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.140 [2024-10-01 13:43:59.457882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.140 [2024-10-01 13:43:59.457931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.140 [2024-10-01 13:43:59.457950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.140 [2024-10-01 13:43:59.458030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.140 [2024-10-01 13:43:59.460413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.140 [2024-10-01 13:43:59.460561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.140 [2024-10-01 13:43:59.460595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.140 [2024-10-01 13:43:59.460629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.140 [2024-10-01 13:43:59.460666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.140 [2024-10-01 13:43:59.460699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.140 [2024-10-01 13:43:59.460718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.140 [2024-10-01 13:43:59.460732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.140 [2024-10-01 13:43:59.460764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.140 [2024-10-01 13:43:59.467323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.140 [2024-10-01 13:43:59.467442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.140 [2024-10-01 13:43:59.467476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.141 [2024-10-01 13:43:59.467495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.141 [2024-10-01 13:43:59.467529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.141 [2024-10-01 13:43:59.467580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.141 [2024-10-01 13:43:59.467600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.141 [2024-10-01 13:43:59.467614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.141 [2024-10-01 13:43:59.467646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.141 [2024-10-01 13:43:59.470609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.141 [2024-10-01 13:43:59.470723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.141 [2024-10-01 13:43:59.470755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.141 [2024-10-01 13:43:59.470774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.141 [2024-10-01 13:43:59.470823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.141 [2024-10-01 13:43:59.471754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.141 [2024-10-01 13:43:59.471794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.141 [2024-10-01 13:43:59.471813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.141 [2024-10-01 13:43:59.472020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.141 [2024-10-01 13:43:59.477488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.141 [2024-10-01 13:43:59.477619] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.141 [2024-10-01 13:43:59.477652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.141 [2024-10-01 13:43:59.477670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.141 [2024-10-01 13:43:59.477703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.141 [2024-10-01 13:43:59.477736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.141 [2024-10-01 13:43:59.477771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.141 [2024-10-01 13:43:59.477787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.141 [2024-10-01 13:43:59.477821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.141 [2024-10-01 13:43:59.481438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.141 [2024-10-01 13:43:59.481568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.141 [2024-10-01 13:43:59.481602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.141 [2024-10-01 13:43:59.481621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.141 [2024-10-01 13:43:59.481655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.141 [2024-10-01 13:43:59.481690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.141 [2024-10-01 13:43:59.481708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.141 [2024-10-01 13:43:59.481723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.141 [2024-10-01 13:43:59.481755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.141 [2024-10-01 13:43:59.488647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.141 [2024-10-01 13:43:59.488764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.141 [2024-10-01 13:43:59.488798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.141 [2024-10-01 13:43:59.488817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.141 [2024-10-01 13:43:59.488850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.141 [2024-10-01 13:43:59.488882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.141 [2024-10-01 13:43:59.488901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.141 [2024-10-01 13:43:59.488915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.141 [2024-10-01 13:43:59.488947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.141 [2024-10-01 13:43:59.491598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.141 [2024-10-01 13:43:59.491720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.141 [2024-10-01 13:43:59.491753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.141 [2024-10-01 13:43:59.491771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.141 [2024-10-01 13:43:59.491804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.141 [2024-10-01 13:43:59.491837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.141 [2024-10-01 13:43:59.491855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.141 [2024-10-01 13:43:59.491869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.141 [2024-10-01 13:43:59.491915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.141 [2024-10-01 13:43:59.498859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.141 [2024-10-01 13:43:59.498978] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.141 [2024-10-01 13:43:59.499011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.141 [2024-10-01 13:43:59.499030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.141 [2024-10-01 13:43:59.499064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.141 [2024-10-01 13:43:59.499097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.141 [2024-10-01 13:43:59.499115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.141 [2024-10-01 13:43:59.499129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.141 [2024-10-01 13:43:59.500068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.141 [2024-10-01 13:43:59.502776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.141 [2024-10-01 13:43:59.502903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.141 [2024-10-01 13:43:59.502942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.141 [2024-10-01 13:43:59.502962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.141 [2024-10-01 13:43:59.502996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.141 [2024-10-01 13:43:59.503029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.141 [2024-10-01 13:43:59.503047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.141 [2024-10-01 13:43:59.503062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.141 [2024-10-01 13:43:59.503094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.141 [2024-10-01 13:43:59.509854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.141 [2024-10-01 13:43:59.509973] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.141 [2024-10-01 13:43:59.510005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.141 [2024-10-01 13:43:59.510024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.141 [2024-10-01 13:43:59.510057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.141 [2024-10-01 13:43:59.510090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.141 [2024-10-01 13:43:59.510108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.141 [2024-10-01 13:43:59.510122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.141 [2024-10-01 13:43:59.510154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.141 [2024-10-01 13:43:59.513099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.141 [2024-10-01 13:43:59.513213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.141 [2024-10-01 13:43:59.513246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.141 [2024-10-01 13:43:59.513264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.141 [2024-10-01 13:43:59.513334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.141 [2024-10-01 13:43:59.514268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.141 [2024-10-01 13:43:59.514308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.141 [2024-10-01 13:43:59.514326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.141 [2024-10-01 13:43:59.514552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.141 [2024-10-01 13:43:59.520035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.141 [2024-10-01 13:43:59.520168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.141 [2024-10-01 13:43:59.520201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.141 [2024-10-01 13:43:59.520219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.142 [2024-10-01 13:43:59.520253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.142 [2024-10-01 13:43:59.520288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.142 [2024-10-01 13:43:59.520305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.142 [2024-10-01 13:43:59.520320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.142 [2024-10-01 13:43:59.520351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.142 [2024-10-01 13:43:59.524037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.142 [2024-10-01 13:43:59.524159] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.142 [2024-10-01 13:43:59.524192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.142 [2024-10-01 13:43:59.524210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.142 [2024-10-01 13:43:59.524244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.142 [2024-10-01 13:43:59.524276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.142 [2024-10-01 13:43:59.524295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.142 [2024-10-01 13:43:59.524309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.142 [2024-10-01 13:43:59.524340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.142 [2024-10-01 13:43:59.531248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.142 [2024-10-01 13:43:59.531378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.142 [2024-10-01 13:43:59.531411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.142 [2024-10-01 13:43:59.531430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.142 [2024-10-01 13:43:59.531464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.142 [2024-10-01 13:43:59.531496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.142 [2024-10-01 13:43:59.531517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.142 [2024-10-01 13:43:59.531566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.142 [2024-10-01 13:43:59.531603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.142 [2024-10-01 13:43:59.534251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.142 [2024-10-01 13:43:59.534377] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.142 [2024-10-01 13:43:59.534410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.142 [2024-10-01 13:43:59.534429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.142 [2024-10-01 13:43:59.534462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.142 [2024-10-01 13:43:59.534495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.142 [2024-10-01 13:43:59.534513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.142 [2024-10-01 13:43:59.534527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.142 [2024-10-01 13:43:59.534579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.142 [2024-10-01 13:43:59.541514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.142 [2024-10-01 13:43:59.541658] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.142 [2024-10-01 13:43:59.541705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.142 [2024-10-01 13:43:59.541735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.142 [2024-10-01 13:43:59.541776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.142 [2024-10-01 13:43:59.542722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.142 [2024-10-01 13:43:59.542760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.142 [2024-10-01 13:43:59.542779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.142 [2024-10-01 13:43:59.543003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.142 [2024-10-01 13:43:59.545505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.142 [2024-10-01 13:43:59.545643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.142 [2024-10-01 13:43:59.545676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.142 [2024-10-01 13:43:59.545694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.142 [2024-10-01 13:43:59.545729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.142 [2024-10-01 13:43:59.545761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.142 [2024-10-01 13:43:59.545780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.142 [2024-10-01 13:43:59.545794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.142 [2024-10-01 13:43:59.545836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.142 [2024-10-01 13:43:59.552515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.142 [2024-10-01 13:43:59.552670] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.142 [2024-10-01 13:43:59.552702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.142 [2024-10-01 13:43:59.552721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.142 [2024-10-01 13:43:59.552765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.142 [2024-10-01 13:43:59.552798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.142 [2024-10-01 13:43:59.552816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.142 [2024-10-01 13:43:59.552830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.142 [2024-10-01 13:43:59.552862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.142 [2024-10-01 13:43:59.555761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.142 [2024-10-01 13:43:59.555886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.142 [2024-10-01 13:43:59.555920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.142 [2024-10-01 13:43:59.555938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.142 [2024-10-01 13:43:59.555990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.142 [2024-10-01 13:43:59.556925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.142 [2024-10-01 13:43:59.556964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.142 [2024-10-01 13:43:59.556983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.142 [2024-10-01 13:43:59.557185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.142 [2024-10-01 13:43:59.562681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.142 [2024-10-01 13:43:59.562838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.142 [2024-10-01 13:43:59.562872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.142 [2024-10-01 13:43:59.562891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.142 [2024-10-01 13:43:59.562926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.142 [2024-10-01 13:43:59.562959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.142 [2024-10-01 13:43:59.562977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.142 [2024-10-01 13:43:59.562992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.142 [2024-10-01 13:43:59.563024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.142 [2024-10-01 13:43:59.566656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.142 [2024-10-01 13:43:59.566774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.142 [2024-10-01 13:43:59.566807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.142 [2024-10-01 13:43:59.566826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.142 [2024-10-01 13:43:59.566859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.142 [2024-10-01 13:43:59.566912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.142 [2024-10-01 13:43:59.566932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.142 [2024-10-01 13:43:59.566947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.142 [2024-10-01 13:43:59.566979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.142 [2024-10-01 13:43:59.573916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.142 [2024-10-01 13:43:59.574042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.142 [2024-10-01 13:43:59.574076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.142 [2024-10-01 13:43:59.574095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.142 [2024-10-01 13:43:59.574129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.142 [2024-10-01 13:43:59.574162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.143 [2024-10-01 13:43:59.574180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.143 [2024-10-01 13:43:59.574194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.143 [2024-10-01 13:43:59.574226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.143 [2024-10-01 13:43:59.576929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.143 [2024-10-01 13:43:59.577051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.143 [2024-10-01 13:43:59.577084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.143 [2024-10-01 13:43:59.577102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.143 [2024-10-01 13:43:59.577137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.143 [2024-10-01 13:43:59.577169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.143 [2024-10-01 13:43:59.577188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.143 [2024-10-01 13:43:59.577202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.143 [2024-10-01 13:43:59.577233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.143 [2024-10-01 13:43:59.584311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.143 [2024-10-01 13:43:59.584441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.143 [2024-10-01 13:43:59.584475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.143 [2024-10-01 13:43:59.584494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.143 [2024-10-01 13:43:59.585436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.143 [2024-10-01 13:43:59.585679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.143 [2024-10-01 13:43:59.585718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.143 [2024-10-01 13:43:59.585737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.143 [2024-10-01 13:43:59.585835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.143 [2024-10-01 13:43:59.588232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.143 [2024-10-01 13:43:59.588388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.143 [2024-10-01 13:43:59.588425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.143 [2024-10-01 13:43:59.588444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.143 [2024-10-01 13:43:59.588479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.143 [2024-10-01 13:43:59.588512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.143 [2024-10-01 13:43:59.588532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.143 [2024-10-01 13:43:59.588564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.143 [2024-10-01 13:43:59.588599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.143 [2024-10-01 13:43:59.595334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.143 [2024-10-01 13:43:59.595471] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.143 [2024-10-01 13:43:59.595505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.143 [2024-10-01 13:43:59.595524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.143 [2024-10-01 13:43:59.595577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.143 [2024-10-01 13:43:59.595613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.143 [2024-10-01 13:43:59.595631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.143 [2024-10-01 13:43:59.595645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.143 [2024-10-01 13:43:59.595678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.143 [2024-10-01 13:43:59.598847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.143 [2024-10-01 13:43:59.598968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.143 [2024-10-01 13:43:59.599001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.143 [2024-10-01 13:43:59.599020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.143 [2024-10-01 13:43:59.599054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.143 [2024-10-01 13:43:59.599086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.143 [2024-10-01 13:43:59.599105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.143 [2024-10-01 13:43:59.599119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.143 [2024-10-01 13:43:59.599151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.143 [2024-10-01 13:43:59.606060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.143 [2024-10-01 13:43:59.606180] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.143 [2024-10-01 13:43:59.606214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.143 [2024-10-01 13:43:59.606251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.143 [2024-10-01 13:43:59.606287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.143 [2024-10-01 13:43:59.606322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.143 [2024-10-01 13:43:59.606339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.143 [2024-10-01 13:43:59.606354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.143 [2024-10-01 13:43:59.606395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.143 [2024-10-01 13:43:59.610163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.143 [2024-10-01 13:43:59.610284] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.143 [2024-10-01 13:43:59.610318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.143 [2024-10-01 13:43:59.610336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.143 [2024-10-01 13:43:59.610369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.143 [2024-10-01 13:43:59.610412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.143 [2024-10-01 13:43:59.610431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.143 [2024-10-01 13:43:59.610445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.143 [2024-10-01 13:43:59.610478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.143 [2024-10-01 13:43:59.617422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.143 [2024-10-01 13:43:59.617564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.143 [2024-10-01 13:43:59.617598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.143 [2024-10-01 13:43:59.617617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.143 [2024-10-01 13:43:59.617652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.143 [2024-10-01 13:43:59.617686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.143 [2024-10-01 13:43:59.617703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.143 [2024-10-01 13:43:59.617718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.143 [2024-10-01 13:43:59.617755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.143 [2024-10-01 13:43:59.620660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.143 [2024-10-01 13:43:59.620789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.143 [2024-10-01 13:43:59.620824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.143 [2024-10-01 13:43:59.620843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.143 [2024-10-01 13:43:59.620877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.143 [2024-10-01 13:43:59.620912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.144 [2024-10-01 13:43:59.620949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.144 [2024-10-01 13:43:59.620965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.144 [2024-10-01 13:43:59.620999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.144 [2024-10-01 13:43:59.627814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.144 [2024-10-01 13:43:59.627966] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.144 [2024-10-01 13:43:59.628001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.144 [2024-10-01 13:43:59.628021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.144 [2024-10-01 13:43:59.628055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.144 [2024-10-01 13:43:59.628986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.144 [2024-10-01 13:43:59.629028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.144 [2024-10-01 13:43:59.629047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.144 [2024-10-01 13:43:59.629239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.144 [2024-10-01 13:43:59.631758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.144 [2024-10-01 13:43:59.631890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.144 [2024-10-01 13:43:59.631925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.144 [2024-10-01 13:43:59.631944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.144 [2024-10-01 13:43:59.631993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.144 [2024-10-01 13:43:59.632031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.144 [2024-10-01 13:43:59.632061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.144 [2024-10-01 13:43:59.632075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.144 [2024-10-01 13:43:59.632107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.144 [2024-10-01 13:43:59.638710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.144 [2024-10-01 13:43:59.638830] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.144 [2024-10-01 13:43:59.638864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.144 [2024-10-01 13:43:59.638883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.144 [2024-10-01 13:43:59.638916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.144 [2024-10-01 13:43:59.638949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.144 [2024-10-01 13:43:59.638967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.144 [2024-10-01 13:43:59.638981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.144 [2024-10-01 13:43:59.639013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.144 [2024-10-01 13:43:59.641986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.144 [2024-10-01 13:43:59.642104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.144 [2024-10-01 13:43:59.642137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.144 [2024-10-01 13:43:59.642155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.144 [2024-10-01 13:43:59.642188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.144 [2024-10-01 13:43:59.642227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.144 [2024-10-01 13:43:59.642245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.144 [2024-10-01 13:43:59.642262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.144 [2024-10-01 13:43:59.643197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.144 [2024-10-01 13:43:59.648893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.144 [2024-10-01 13:43:59.649012] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.144 [2024-10-01 13:43:59.649045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.144 [2024-10-01 13:43:59.649064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.144 [2024-10-01 13:43:59.649097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.144 [2024-10-01 13:43:59.649130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.144 [2024-10-01 13:43:59.649148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.144 [2024-10-01 13:43:59.649163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.144 [2024-10-01 13:43:59.649196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.144 [2024-10-01 13:43:59.652863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.144 [2024-10-01 13:43:59.653000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.144 [2024-10-01 13:43:59.653034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.144 [2024-10-01 13:43:59.653052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.144 [2024-10-01 13:43:59.653084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.144 [2024-10-01 13:43:59.653117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.144 [2024-10-01 13:43:59.653135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.144 [2024-10-01 13:43:59.653150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.144 [2024-10-01 13:43:59.653182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.144 [2024-10-01 13:43:59.660006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.144 [2024-10-01 13:43:59.660136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.144 [2024-10-01 13:43:59.660169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.144 [2024-10-01 13:43:59.660187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.144 [2024-10-01 13:43:59.660237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.144 [2024-10-01 13:43:59.660272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.144 [2024-10-01 13:43:59.660290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.144 [2024-10-01 13:43:59.660305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.144 [2024-10-01 13:43:59.660337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.144 [2024-10-01 13:43:59.663027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.144 [2024-10-01 13:43:59.663142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.144 [2024-10-01 13:43:59.663174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.144 [2024-10-01 13:43:59.663193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.144 [2024-10-01 13:43:59.663226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.144 [2024-10-01 13:43:59.663258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.144 [2024-10-01 13:43:59.663276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.144 [2024-10-01 13:43:59.663291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.144 [2024-10-01 13:43:59.663323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.144 8440.57 IOPS, 32.97 MiB/s [2024-10-01 13:43:59.670245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.144 [2024-10-01 13:43:59.670362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.144 [2024-10-01 13:43:59.670395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.144 [2024-10-01 13:43:59.670414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.144 [2024-10-01 13:43:59.670448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.144 [2024-10-01 13:43:59.671391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.144 [2024-10-01 13:43:59.671431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.144 [2024-10-01 13:43:59.671449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.144 [2024-10-01 13:43:59.671660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.144 [2024-10-01 13:43:59.674317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.144 [2024-10-01 13:43:59.674444] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.144 [2024-10-01 13:43:59.674479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.144 [2024-10-01 13:43:59.674499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.144 [2024-10-01 13:43:59.674552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.144 [2024-10-01 13:43:59.674593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.144 [2024-10-01 13:43:59.674612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.144 [2024-10-01 13:43:59.674643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.144 [2024-10-01 13:43:59.674678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.144 [2024-10-01 13:43:59.681078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.144 [2024-10-01 13:43:59.681212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.144 [2024-10-01 13:43:59.681246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.144 [2024-10-01 13:43:59.681266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.144 [2024-10-01 13:43:59.681300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.144 [2024-10-01 13:43:59.681333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.144 [2024-10-01 13:43:59.681351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.145 [2024-10-01 13:43:59.681365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.145 [2024-10-01 13:43:59.681399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.145 [2024-10-01 13:43:59.684420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.145 [2024-10-01 13:43:59.684554] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.145 [2024-10-01 13:43:59.684588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.145 [2024-10-01 13:43:59.684608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.145 [2024-10-01 13:43:59.685531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.145 [2024-10-01 13:43:59.685784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.145 [2024-10-01 13:43:59.685828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.145 [2024-10-01 13:43:59.685847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.145 [2024-10-01 13:43:59.685928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.145 [2024-10-01 13:43:59.691221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.145 [2024-10-01 13:43:59.691343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.145 [2024-10-01 13:43:59.691375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.145 [2024-10-01 13:43:59.691394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.145 [2024-10-01 13:43:59.691428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.145 [2024-10-01 13:43:59.691460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.145 [2024-10-01 13:43:59.691478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.145 [2024-10-01 13:43:59.691493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.145 [2024-10-01 13:43:59.691526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.145 [2024-10-01 13:43:59.695140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.145 [2024-10-01 13:43:59.695280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.145 [2024-10-01 13:43:59.695315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.145 [2024-10-01 13:43:59.695334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.145 [2024-10-01 13:43:59.695385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.145 [2024-10-01 13:43:59.695423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.145 [2024-10-01 13:43:59.695442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.145 [2024-10-01 13:43:59.695456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.145 [2024-10-01 13:43:59.695488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.145 [2024-10-01 13:43:59.702326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.145 [2024-10-01 13:43:59.702447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.145 [2024-10-01 13:43:59.702480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.145 [2024-10-01 13:43:59.702499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.145 [2024-10-01 13:43:59.702532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.145 [2024-10-01 13:43:59.702585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.145 [2024-10-01 13:43:59.702603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.145 [2024-10-01 13:43:59.702618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.145 [2024-10-01 13:43:59.702651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.145 [2024-10-01 13:43:59.705313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.145 [2024-10-01 13:43:59.705428] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.145 [2024-10-01 13:43:59.705460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.145 [2024-10-01 13:43:59.705479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.145 [2024-10-01 13:43:59.705512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.145 [2024-10-01 13:43:59.705562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.145 [2024-10-01 13:43:59.705583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.145 [2024-10-01 13:43:59.705598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.145 [2024-10-01 13:43:59.705630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.145 [2024-10-01 13:43:59.712491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.145 [2024-10-01 13:43:59.712621] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.145 [2024-10-01 13:43:59.712655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.145 [2024-10-01 13:43:59.712674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.145 [2024-10-01 13:43:59.712742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.145 [2024-10-01 13:43:59.713680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.145 [2024-10-01 13:43:59.713719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.145 [2024-10-01 13:43:59.713738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.145 [2024-10-01 13:43:59.713940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.145 [2024-10-01 13:43:59.716442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.145 [2024-10-01 13:43:59.716579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.145 [2024-10-01 13:43:59.716613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.145 [2024-10-01 13:43:59.716631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.145 [2024-10-01 13:43:59.716666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.145 [2024-10-01 13:43:59.716718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.145 [2024-10-01 13:43:59.716741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.145 [2024-10-01 13:43:59.716756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.145 [2024-10-01 13:43:59.716789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.145 [2024-10-01 13:43:59.723371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.145 [2024-10-01 13:43:59.723488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.145 [2024-10-01 13:43:59.723521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.145 [2024-10-01 13:43:59.723554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.145 [2024-10-01 13:43:59.723592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.145 [2024-10-01 13:43:59.723625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.145 [2024-10-01 13:43:59.723643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.145 [2024-10-01 13:43:59.723657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.145 [2024-10-01 13:43:59.723689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.145 [2024-10-01 13:43:59.726651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.145 [2024-10-01 13:43:59.726768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.145 [2024-10-01 13:43:59.726800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.145 [2024-10-01 13:43:59.726819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.145 [2024-10-01 13:43:59.726852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.145 [2024-10-01 13:43:59.726884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.145 [2024-10-01 13:43:59.726902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.145 [2024-10-01 13:43:59.726931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.145 [2024-10-01 13:43:59.727863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.145 [2024-10-01 13:43:59.733506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.145 [2024-10-01 13:43:59.733652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.145 [2024-10-01 13:43:59.733686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.145 [2024-10-01 13:43:59.733705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.145 [2024-10-01 13:43:59.733738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.145 [2024-10-01 13:43:59.733770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.145 [2024-10-01 13:43:59.733788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.145 [2024-10-01 13:43:59.733802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.145 [2024-10-01 13:43:59.733834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.145 [2024-10-01 13:43:59.737487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.145 [2024-10-01 13:43:59.737621] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.145 [2024-10-01 13:43:59.737654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.145 [2024-10-01 13:43:59.737673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.145 [2024-10-01 13:43:59.737723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.145 [2024-10-01 13:43:59.737761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.145 [2024-10-01 13:43:59.737779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.145 [2024-10-01 13:43:59.737794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.145 [2024-10-01 13:43:59.737826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.146 [2024-10-01 13:43:59.744649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.146 [2024-10-01 13:43:59.744768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.146 [2024-10-01 13:43:59.744801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.146 [2024-10-01 13:43:59.744820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.146 [2024-10-01 13:43:59.744854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.146 [2024-10-01 13:43:59.744886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.146 [2024-10-01 13:43:59.744903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.146 [2024-10-01 13:43:59.744918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.146 [2024-10-01 13:43:59.744950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.146 [2024-10-01 13:43:59.747648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.146 [2024-10-01 13:43:59.747771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.146 [2024-10-01 13:43:59.747822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.146 [2024-10-01 13:43:59.747843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.146 [2024-10-01 13:43:59.747889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.146 [2024-10-01 13:43:59.747925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.146 [2024-10-01 13:43:59.747943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.146 [2024-10-01 13:43:59.747957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.146 [2024-10-01 13:43:59.747988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.146 [2024-10-01 13:43:59.754872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.146 [2024-10-01 13:43:59.754993] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.146 [2024-10-01 13:43:59.755027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.146 [2024-10-01 13:43:59.755046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.146 [2024-10-01 13:43:59.755096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.146 [2024-10-01 13:43:59.756051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.146 [2024-10-01 13:43:59.756091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.146 [2024-10-01 13:43:59.756110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.146 [2024-10-01 13:43:59.756312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.146 [2024-10-01 13:43:59.758784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.146 [2024-10-01 13:43:59.758910] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.146 [2024-10-01 13:43:59.758942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.146 [2024-10-01 13:43:59.758960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.146 [2024-10-01 13:43:59.758995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.146 [2024-10-01 13:43:59.759044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.146 [2024-10-01 13:43:59.759067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.146 [2024-10-01 13:43:59.759082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.146 [2024-10-01 13:43:59.759114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.146 [2024-10-01 13:43:59.766159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.146 [2024-10-01 13:43:59.766278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.146 [2024-10-01 13:43:59.766312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.146 [2024-10-01 13:43:59.766330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.146 [2024-10-01 13:43:59.766364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.146 [2024-10-01 13:43:59.766421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.146 [2024-10-01 13:43:59.766440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.146 [2024-10-01 13:43:59.766455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.146 [2024-10-01 13:43:59.766487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.146 [2024-10-01 13:43:59.769769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.146 [2024-10-01 13:43:59.769885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.146 [2024-10-01 13:43:59.769918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.146 [2024-10-01 13:43:59.769937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.146 [2024-10-01 13:43:59.769970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.146 [2024-10-01 13:43:59.770003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.146 [2024-10-01 13:43:59.770021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.146 [2024-10-01 13:43:59.770036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.146 [2024-10-01 13:43:59.770977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.146 [2024-10-01 13:43:59.777804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.146 [2024-10-01 13:43:59.777951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.146 [2024-10-01 13:43:59.777985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.146 [2024-10-01 13:43:59.778004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.146 [2024-10-01 13:43:59.778038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.146 [2024-10-01 13:43:59.778071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.146 [2024-10-01 13:43:59.778088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.146 [2024-10-01 13:43:59.778102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.146 [2024-10-01 13:43:59.778134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.146 [2024-10-01 13:43:59.781492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.146 [2024-10-01 13:43:59.782196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.146 [2024-10-01 13:43:59.782242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.146 [2024-10-01 13:43:59.782264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.146 [2024-10-01 13:43:59.782368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.146 [2024-10-01 13:43:59.782409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.146 [2024-10-01 13:43:59.782428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.146 [2024-10-01 13:43:59.782442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.146 [2024-10-01 13:43:59.782476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.146 [2024-10-01 13:43:59.789470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.146 [2024-10-01 13:43:59.789614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.146 [2024-10-01 13:43:59.789649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.146 [2024-10-01 13:43:59.789667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.146 [2024-10-01 13:43:59.789702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.146 [2024-10-01 13:43:59.789735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.146 [2024-10-01 13:43:59.789753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.146 [2024-10-01 13:43:59.789767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.146 [2024-10-01 13:43:59.789799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.146 [2024-10-01 13:43:59.792488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.146 [2024-10-01 13:43:59.792626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.146 [2024-10-01 13:43:59.792659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.146 [2024-10-01 13:43:59.792678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.146 [2024-10-01 13:43:59.792711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.146 [2024-10-01 13:43:59.792744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.146 [2024-10-01 13:43:59.792762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.146 [2024-10-01 13:43:59.792776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.146 [2024-10-01 13:43:59.792808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.146 [2024-10-01 13:43:59.799788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.146 [2024-10-01 13:43:59.799919] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.146 [2024-10-01 13:43:59.799954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.146 [2024-10-01 13:43:59.799982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.146 [2024-10-01 13:43:59.800016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.146 [2024-10-01 13:43:59.800049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.146 [2024-10-01 13:43:59.800067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.146 [2024-10-01 13:43:59.800081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.146 [2024-10-01 13:43:59.801007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.146 [2024-10-01 13:43:59.803772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.147 [2024-10-01 13:43:59.803907] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.147 [2024-10-01 13:43:59.803941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.147 [2024-10-01 13:43:59.803977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.147 [2024-10-01 13:43:59.804014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.147 [2024-10-01 13:43:59.804048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.147 [2024-10-01 13:43:59.804066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.147 [2024-10-01 13:43:59.804081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.147 [2024-10-01 13:43:59.804113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.147 [2024-10-01 13:43:59.810733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.147 [2024-10-01 13:43:59.810852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.147 [2024-10-01 13:43:59.810885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.147 [2024-10-01 13:43:59.810904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.147 [2024-10-01 13:43:59.810938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.147 [2024-10-01 13:43:59.810971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.147 [2024-10-01 13:43:59.810989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.147 [2024-10-01 13:43:59.811003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.147 [2024-10-01 13:43:59.811035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.147 [2024-10-01 13:43:59.813992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.147 [2024-10-01 13:43:59.814107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.147 [2024-10-01 13:43:59.814140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.147 [2024-10-01 13:43:59.814159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.147 [2024-10-01 13:43:59.814207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.147 [2024-10-01 13:43:59.815142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.147 [2024-10-01 13:43:59.815182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.147 [2024-10-01 13:43:59.815201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.147 [2024-10-01 13:43:59.815388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.147 [2024-10-01 13:43:59.820876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.147 [2024-10-01 13:43:59.820995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.147 [2024-10-01 13:43:59.821028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.147 [2024-10-01 13:43:59.821047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.147 [2024-10-01 13:43:59.821081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.147 [2024-10-01 13:43:59.821115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.147 [2024-10-01 13:43:59.821150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.147 [2024-10-01 13:43:59.821166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.147 [2024-10-01 13:43:59.821200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.147 [2024-10-01 13:43:59.824981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.147 [2024-10-01 13:43:59.825098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.147 [2024-10-01 13:43:59.825130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.147 [2024-10-01 13:43:59.825149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.147 [2024-10-01 13:43:59.825182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.147 [2024-10-01 13:43:59.825215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.147 [2024-10-01 13:43:59.825233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.147 [2024-10-01 13:43:59.825248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.147 [2024-10-01 13:43:59.825279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.147 [2024-10-01 13:43:59.832186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.147 [2024-10-01 13:43:59.832314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.147 [2024-10-01 13:43:59.832348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.147 [2024-10-01 13:43:59.832367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.147 [2024-10-01 13:43:59.832400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.147 [2024-10-01 13:43:59.832433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.147 [2024-10-01 13:43:59.832451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.147 [2024-10-01 13:43:59.832465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.147 [2024-10-01 13:43:59.832497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.147 [2024-10-01 13:43:59.835151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.147 [2024-10-01 13:43:59.835273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.147 [2024-10-01 13:43:59.835305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.147 [2024-10-01 13:43:59.835323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.147 [2024-10-01 13:43:59.835357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.147 [2024-10-01 13:43:59.835389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.147 [2024-10-01 13:43:59.835408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.147 [2024-10-01 13:43:59.835422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.147 [2024-10-01 13:43:59.835453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.147 [2024-10-01 13:43:59.842343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.147 [2024-10-01 13:43:59.842482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.147 [2024-10-01 13:43:59.842516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.147 [2024-10-01 13:43:59.842548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.147 [2024-10-01 13:43:59.843467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.147 [2024-10-01 13:43:59.843723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.147 [2024-10-01 13:43:59.843763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.147 [2024-10-01 13:43:59.843782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.147 [2024-10-01 13:43:59.843861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.147 [2024-10-01 13:43:59.846270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.147 [2024-10-01 13:43:59.846393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.147 [2024-10-01 13:43:59.846426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.147 [2024-10-01 13:43:59.846444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.147 [2024-10-01 13:43:59.846477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.147 [2024-10-01 13:43:59.846510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.147 [2024-10-01 13:43:59.846528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.147 [2024-10-01 13:43:59.846559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.147 [2024-10-01 13:43:59.846594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.147 [2024-10-01 13:43:59.853214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.147 [2024-10-01 13:43:59.853332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.147 [2024-10-01 13:43:59.853365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.147 [2024-10-01 13:43:59.853383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.147 [2024-10-01 13:43:59.853417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.147 [2024-10-01 13:43:59.853449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.147 [2024-10-01 13:43:59.853467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.147 [2024-10-01 13:43:59.853481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.147 [2024-10-01 13:43:59.853513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.147 [2024-10-01 13:43:59.856486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.147 [2024-10-01 13:43:59.856613] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.147 [2024-10-01 13:43:59.856646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.148 [2024-10-01 13:43:59.856665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.148 [2024-10-01 13:43:59.856732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.148 [2024-10-01 13:43:59.857672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.148 [2024-10-01 13:43:59.857710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.148 [2024-10-01 13:43:59.857730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.148 [2024-10-01 13:43:59.857932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.148 [2024-10-01 13:43:59.863353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.148 [2024-10-01 13:43:59.863479] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.148 [2024-10-01 13:43:59.863512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.148 [2024-10-01 13:43:59.863531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.148 [2024-10-01 13:43:59.863584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.148 [2024-10-01 13:43:59.863620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.148 [2024-10-01 13:43:59.863637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.148 [2024-10-01 13:43:59.863651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.148 [2024-10-01 13:43:59.863683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.148 [2024-10-01 13:43:59.867350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.148 [2024-10-01 13:43:59.867465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.148 [2024-10-01 13:43:59.867498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.148 [2024-10-01 13:43:59.867517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.148 [2024-10-01 13:43:59.867567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.148 [2024-10-01 13:43:59.867604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.148 [2024-10-01 13:43:59.867622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.148 [2024-10-01 13:43:59.867637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.148 [2024-10-01 13:43:59.867668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.148 [2024-10-01 13:43:59.874518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.148 [2024-10-01 13:43:59.874676] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.148 [2024-10-01 13:43:59.874714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.148 [2024-10-01 13:43:59.874733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.148 [2024-10-01 13:43:59.874768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.148 [2024-10-01 13:43:59.874802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.148 [2024-10-01 13:43:59.874819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.148 [2024-10-01 13:43:59.874851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.148 [2024-10-01 13:43:59.874888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.148 [2024-10-01 13:43:59.877504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.148 [2024-10-01 13:43:59.877635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.148 [2024-10-01 13:43:59.877668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.148 [2024-10-01 13:43:59.877687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.148 [2024-10-01 13:43:59.877721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.148 [2024-10-01 13:43:59.877753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.148 [2024-10-01 13:43:59.877772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.148 [2024-10-01 13:43:59.877787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.148 [2024-10-01 13:43:59.877819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.148 [2024-10-01 13:43:59.884723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.148 [2024-10-01 13:43:59.884847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.148 [2024-10-01 13:43:59.884882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.148 [2024-10-01 13:43:59.884901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.148 [2024-10-01 13:43:59.884936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.148 [2024-10-01 13:43:59.885865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.148 [2024-10-01 13:43:59.885905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.148 [2024-10-01 13:43:59.885924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.148 [2024-10-01 13:43:59.886146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.148 [2024-10-01 13:43:59.888650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.148 [2024-10-01 13:43:59.888768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.148 [2024-10-01 13:43:59.888801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.148 [2024-10-01 13:43:59.888820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.148 [2024-10-01 13:43:59.888853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.148 [2024-10-01 13:43:59.888886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.148 [2024-10-01 13:43:59.888904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.148 [2024-10-01 13:43:59.888919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.148 [2024-10-01 13:43:59.888951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.148 [2024-10-01 13:43:59.895528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.148 [2024-10-01 13:43:59.895661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.148 [2024-10-01 13:43:59.895711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.148 [2024-10-01 13:43:59.895732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.148 [2024-10-01 13:43:59.895767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.148 [2024-10-01 13:43:59.895800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.148 [2024-10-01 13:43:59.895818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.148 [2024-10-01 13:43:59.895832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.148 [2024-10-01 13:43:59.895865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.148 [2024-10-01 13:43:59.898820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.148 [2024-10-01 13:43:59.898939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.148 [2024-10-01 13:43:59.898972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.148 [2024-10-01 13:43:59.898991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.148 [2024-10-01 13:43:59.899952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.148 [2024-10-01 13:43:59.900177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.148 [2024-10-01 13:43:59.900215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.148 [2024-10-01 13:43:59.900233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.148 [2024-10-01 13:43:59.900315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.149 [2024-10-01 13:43:59.905638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.149 [2024-10-01 13:43:59.905762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.149 [2024-10-01 13:43:59.905796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.149 [2024-10-01 13:43:59.905815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.149 [2024-10-01 13:43:59.905848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.149 [2024-10-01 13:43:59.905881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.149 [2024-10-01 13:43:59.905899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.149 [2024-10-01 13:43:59.905913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.149 [2024-10-01 13:43:59.905945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.149 [2024-10-01 13:43:59.909625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.149 [2024-10-01 13:43:59.909744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.149 [2024-10-01 13:43:59.909776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.149 [2024-10-01 13:43:59.909795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.149 [2024-10-01 13:43:59.909828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.149 [2024-10-01 13:43:59.909880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.149 [2024-10-01 13:43:59.909900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.149 [2024-10-01 13:43:59.909915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.149 [2024-10-01 13:43:59.909947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.149 [2024-10-01 13:43:59.916798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.149 [2024-10-01 13:43:59.916917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.149 [2024-10-01 13:43:59.916950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.149 [2024-10-01 13:43:59.916968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.149 [2024-10-01 13:43:59.917002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.150 [2024-10-01 13:43:59.917035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.150 [2024-10-01 13:43:59.917052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.150 [2024-10-01 13:43:59.917068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.150 [2024-10-01 13:43:59.917100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.150 [2024-10-01 13:43:59.919743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.150 [2024-10-01 13:43:59.919860] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.150 [2024-10-01 13:43:59.919905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.150 [2024-10-01 13:43:59.919924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.150 [2024-10-01 13:43:59.919959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.150 [2024-10-01 13:43:59.919991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.150 [2024-10-01 13:43:59.920009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.150 [2024-10-01 13:43:59.920023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.150 [2024-10-01 13:43:59.920055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.150 [2024-10-01 13:43:59.927075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.150 [2024-10-01 13:43:59.927261] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.150 [2024-10-01 13:43:59.927298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.150 [2024-10-01 13:43:59.927317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.150 [2024-10-01 13:43:59.928280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.150 [2024-10-01 13:43:59.928530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.150 [2024-10-01 13:43:59.928580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.150 [2024-10-01 13:43:59.928599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.150 [2024-10-01 13:43:59.928681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.150 [2024-10-01 13:43:59.931027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.150 [2024-10-01 13:43:59.931152] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.150 [2024-10-01 13:43:59.931185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.150 [2024-10-01 13:43:59.931204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.150 [2024-10-01 13:43:59.931238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.150 [2024-10-01 13:43:59.931270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.150 [2024-10-01 13:43:59.931289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.150 [2024-10-01 13:43:59.931303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.150 [2024-10-01 13:43:59.931335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.150 [2024-10-01 13:43:59.937994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.150 [2024-10-01 13:43:59.938114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.150 [2024-10-01 13:43:59.938146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.150 [2024-10-01 13:43:59.938165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.150 [2024-10-01 13:43:59.938199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.150 [2024-10-01 13:43:59.938231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.150 [2024-10-01 13:43:59.938249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.150 [2024-10-01 13:43:59.938263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.150 [2024-10-01 13:43:59.938295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.150 [2024-10-01 13:43:59.941221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.150 [2024-10-01 13:43:59.941336] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.150 [2024-10-01 13:43:59.941368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.150 [2024-10-01 13:43:59.941387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.150 [2024-10-01 13:43:59.941436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.150 [2024-10-01 13:43:59.942367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.150 [2024-10-01 13:43:59.942406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.150 [2024-10-01 13:43:59.942426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.150 [2024-10-01 13:43:59.942629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.150 [2024-10-01 13:43:59.948089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.150 [2024-10-01 13:43:59.948208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.150 [2024-10-01 13:43:59.948241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.150 [2024-10-01 13:43:59.948277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.150 [2024-10-01 13:43:59.948313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.150 [2024-10-01 13:43:59.948346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.150 [2024-10-01 13:43:59.948364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.150 [2024-10-01 13:43:59.948379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.150 [2024-10-01 13:43:59.948411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.150 [2024-10-01 13:43:59.952028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.150 [2024-10-01 13:43:59.952145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.150 [2024-10-01 13:43:59.952177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.150 [2024-10-01 13:43:59.952195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.150 [2024-10-01 13:43:59.952229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.150 [2024-10-01 13:43:59.952261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.150 [2024-10-01 13:43:59.952279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.150 [2024-10-01 13:43:59.952293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.150 [2024-10-01 13:43:59.952325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.150 [2024-10-01 13:43:59.959189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.150 [2024-10-01 13:43:59.959306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.150 [2024-10-01 13:43:59.959339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.150 [2024-10-01 13:43:59.959358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.150 [2024-10-01 13:43:59.959391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.150 [2024-10-01 13:43:59.959423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.150 [2024-10-01 13:43:59.959441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.150 [2024-10-01 13:43:59.959455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.150 [2024-10-01 13:43:59.959487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.150 [2024-10-01 13:43:59.962145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.150 [2024-10-01 13:43:59.962258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.150 [2024-10-01 13:43:59.962290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.150 [2024-10-01 13:43:59.962308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.150 [2024-10-01 13:43:59.962342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.150 [2024-10-01 13:43:59.962375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.150 [2024-10-01 13:43:59.962410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.150 [2024-10-01 13:43:59.962426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.150 [2024-10-01 13:43:59.962459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.150 [2024-10-01 13:43:59.969285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.150 [2024-10-01 13:43:59.969403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.150 [2024-10-01 13:43:59.969436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.150 [2024-10-01 13:43:59.969454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.150 [2024-10-01 13:43:59.969488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.150 [2024-10-01 13:43:59.970413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.150 [2024-10-01 13:43:59.970453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.150 [2024-10-01 13:43:59.970472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.150 [2024-10-01 13:43:59.970699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.150 [2024-10-01 13:43:59.973198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.150 [2024-10-01 13:43:59.973324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.150 [2024-10-01 13:43:59.973357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.150 [2024-10-01 13:43:59.973375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.151 [2024-10-01 13:43:59.973408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.151 [2024-10-01 13:43:59.973441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.151 [2024-10-01 13:43:59.973459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.151 [2024-10-01 13:43:59.973474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.151 [2024-10-01 13:43:59.973505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.151 [2024-10-01 13:43:59.980225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.151 [2024-10-01 13:43:59.980419] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.151 [2024-10-01 13:43:59.980456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.151 [2024-10-01 13:43:59.980476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.151 [2024-10-01 13:43:59.980515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.151 [2024-10-01 13:43:59.980564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.151 [2024-10-01 13:43:59.980585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.151 [2024-10-01 13:43:59.980602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.151 [2024-10-01 13:43:59.980635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.151 [2024-10-01 13:43:59.983614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.151 [2024-10-01 13:43:59.983774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.151 [2024-10-01 13:43:59.983807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.151 [2024-10-01 13:43:59.983826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.151 [2024-10-01 13:43:59.984792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.151 [2024-10-01 13:43:59.985009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.151 [2024-10-01 13:43:59.985046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.151 [2024-10-01 13:43:59.985064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.151 [2024-10-01 13:43:59.985145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.151 [2024-10-01 13:43:59.990474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.151 [2024-10-01 13:43:59.990606] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.151 [2024-10-01 13:43:59.990639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.151 [2024-10-01 13:43:59.990659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.151 [2024-10-01 13:43:59.990693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.151 [2024-10-01 13:43:59.990726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.151 [2024-10-01 13:43:59.990744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.151 [2024-10-01 13:43:59.990758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.151 [2024-10-01 13:43:59.990790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.151 [2024-10-01 13:43:59.994476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.151 [2024-10-01 13:43:59.994606] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.151 [2024-10-01 13:43:59.994639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.151 [2024-10-01 13:43:59.994658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.151 [2024-10-01 13:43:59.994691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.151 [2024-10-01 13:43:59.994724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.151 [2024-10-01 13:43:59.994742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.151 [2024-10-01 13:43:59.994757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.151 [2024-10-01 13:43:59.994789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.151 [2024-10-01 13:44:00.002018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.151 [2024-10-01 13:44:00.002152] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.151 [2024-10-01 13:44:00.002187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.151 [2024-10-01 13:44:00.002206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.151 [2024-10-01 13:44:00.002262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.151 [2024-10-01 13:44:00.002297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.151 [2024-10-01 13:44:00.002315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.151 [2024-10-01 13:44:00.002330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.151 [2024-10-01 13:44:00.002364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.151 [2024-10-01 13:44:00.004686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.151 [2024-10-01 13:44:00.004832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.151 [2024-10-01 13:44:00.004869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.151 [2024-10-01 13:44:00.004888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.151 [2024-10-01 13:44:00.004922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.151 [2024-10-01 13:44:00.004956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.151 [2024-10-01 13:44:00.004975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.151 [2024-10-01 13:44:00.004989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.151 [2024-10-01 13:44:00.005022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.151 [2024-10-01 13:44:00.013187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.151 [2024-10-01 13:44:00.013309] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.151 [2024-10-01 13:44:00.013342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.151 [2024-10-01 13:44:00.013361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.151 [2024-10-01 13:44:00.013401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.151 [2024-10-01 13:44:00.014644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.151 [2024-10-01 13:44:00.014687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.151 [2024-10-01 13:44:00.014705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.151 [2024-10-01 13:44:00.015600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.151 [2024-10-01 13:44:00.015890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.151 [2024-10-01 13:44:00.016006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.151 [2024-10-01 13:44:00.016039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.151 [2024-10-01 13:44:00.016058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.151 [2024-10-01 13:44:00.016092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.151 [2024-10-01 13:44:00.016125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.151 [2024-10-01 13:44:00.016143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.151 [2024-10-01 13:44:00.016173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.151 [2024-10-01 13:44:00.016209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.151 [2024-10-01 13:44:00.023652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.151 [2024-10-01 13:44:00.023850] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.151 [2024-10-01 13:44:00.023899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.151 [2024-10-01 13:44:00.023921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.151 [2024-10-01 13:44:00.023965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.151 [2024-10-01 13:44:00.024000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.151 [2024-10-01 13:44:00.024019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.151 [2024-10-01 13:44:00.024034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.151 [2024-10-01 13:44:00.024067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.151 [2024-10-01 13:44:00.026077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.151 [2024-10-01 13:44:00.026193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.151 [2024-10-01 13:44:00.026225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.151 [2024-10-01 13:44:00.026244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.151 [2024-10-01 13:44:00.026277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.151 [2024-10-01 13:44:00.027207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.151 [2024-10-01 13:44:00.027246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.151 [2024-10-01 13:44:00.027265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.151 [2024-10-01 13:44:00.027461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.151 [2024-10-01 13:44:00.033748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.151 [2024-10-01 13:44:00.033866] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.152 [2024-10-01 13:44:00.033898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.152 [2024-10-01 13:44:00.033917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.152 [2024-10-01 13:44:00.033950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.152 [2024-10-01 13:44:00.033984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.152 [2024-10-01 13:44:00.034002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.152 [2024-10-01 13:44:00.034017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.152 [2024-10-01 13:44:00.034049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.152 [2024-10-01 13:44:00.036927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.152 [2024-10-01 13:44:00.037045] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.152 [2024-10-01 13:44:00.037101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.152 [2024-10-01 13:44:00.037122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.152 [2024-10-01 13:44:00.037157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.152 [2024-10-01 13:44:00.037190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.152 [2024-10-01 13:44:00.037208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.152 [2024-10-01 13:44:00.037222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.152 [2024-10-01 13:44:00.037254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.152 [2024-10-01 13:44:00.044087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.152 [2024-10-01 13:44:00.044215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.152 [2024-10-01 13:44:00.044249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.152 [2024-10-01 13:44:00.044268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.152 [2024-10-01 13:44:00.044302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.152 [2024-10-01 13:44:00.044335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.152 [2024-10-01 13:44:00.044353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.152 [2024-10-01 13:44:00.044367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.152 [2024-10-01 13:44:00.044400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.152 [2024-10-01 13:44:00.047055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.152 [2024-10-01 13:44:00.047186] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.152 [2024-10-01 13:44:00.047218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.152 [2024-10-01 13:44:00.047237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.152 [2024-10-01 13:44:00.047271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.152 [2024-10-01 13:44:00.047304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.152 [2024-10-01 13:44:00.047323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.152 [2024-10-01 13:44:00.047337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.152 [2024-10-01 13:44:00.047369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.152 [2024-10-01 13:44:00.054284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.152 [2024-10-01 13:44:00.054429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.152 [2024-10-01 13:44:00.054463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.152 [2024-10-01 13:44:00.054483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.152 [2024-10-01 13:44:00.054518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.152 [2024-10-01 13:44:00.055489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.152 [2024-10-01 13:44:00.055529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.152 [2024-10-01 13:44:00.055561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.152 [2024-10-01 13:44:00.055790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.152 [2024-10-01 13:44:00.058223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.152 [2024-10-01 13:44:00.058341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.152 [2024-10-01 13:44:00.058373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.152 [2024-10-01 13:44:00.058391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.152 [2024-10-01 13:44:00.058425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.152 [2024-10-01 13:44:00.058458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.152 [2024-10-01 13:44:00.058476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.152 [2024-10-01 13:44:00.058491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.152 [2024-10-01 13:44:00.058524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.152 [2024-10-01 13:44:00.065194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.152 [2024-10-01 13:44:00.065357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.152 [2024-10-01 13:44:00.065393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.152 [2024-10-01 13:44:00.065412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.152 [2024-10-01 13:44:00.065450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.152 [2024-10-01 13:44:00.065483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.152 [2024-10-01 13:44:00.065501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.152 [2024-10-01 13:44:00.065516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.152 [2024-10-01 13:44:00.065565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.152 [2024-10-01 13:44:00.068521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.152 [2024-10-01 13:44:00.068657] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.152 [2024-10-01 13:44:00.068690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.152 [2024-10-01 13:44:00.068708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.152 [2024-10-01 13:44:00.069655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.152 [2024-10-01 13:44:00.069881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.152 [2024-10-01 13:44:00.069918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.152 [2024-10-01 13:44:00.069936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.152 [2024-10-01 13:44:00.070017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.152 [2024-10-01 13:44:00.075342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.152 [2024-10-01 13:44:00.075480] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.152 [2024-10-01 13:44:00.075523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.152 [2024-10-01 13:44:00.075560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.153 [2024-10-01 13:44:00.075597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.153 [2024-10-01 13:44:00.075630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.153 [2024-10-01 13:44:00.075648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.153 [2024-10-01 13:44:00.075663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.153 [2024-10-01 13:44:00.075694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.153 [2024-10-01 13:44:00.079366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.153 [2024-10-01 13:44:00.079561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.153 [2024-10-01 13:44:00.079599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.153 [2024-10-01 13:44:00.079619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.153 [2024-10-01 13:44:00.079657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.153 [2024-10-01 13:44:00.079690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.153 [2024-10-01 13:44:00.079709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.153 [2024-10-01 13:44:00.079724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.153 [2024-10-01 13:44:00.079759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.153 [2024-10-01 13:44:00.086745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.153 [2024-10-01 13:44:00.086948] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.153 [2024-10-01 13:44:00.086985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.153 [2024-10-01 13:44:00.087005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.153 [2024-10-01 13:44:00.087042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.153 [2024-10-01 13:44:00.087076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.153 [2024-10-01 13:44:00.087094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.153 [2024-10-01 13:44:00.087110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.153 [2024-10-01 13:44:00.087143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.153 [2024-10-01 13:44:00.089756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.153 [2024-10-01 13:44:00.089894] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.153 [2024-10-01 13:44:00.089928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.153 [2024-10-01 13:44:00.089972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.153 [2024-10-01 13:44:00.090009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.153 [2024-10-01 13:44:00.090042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.153 [2024-10-01 13:44:00.090061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.153 [2024-10-01 13:44:00.090076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.153 [2024-10-01 13:44:00.090109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.153 [2024-10-01 13:44:00.097035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.153 [2024-10-01 13:44:00.097196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.153 [2024-10-01 13:44:00.097232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.153 [2024-10-01 13:44:00.097251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.153 [2024-10-01 13:44:00.098206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.153 [2024-10-01 13:44:00.098453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.153 [2024-10-01 13:44:00.098492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.153 [2024-10-01 13:44:00.098511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.153 [2024-10-01 13:44:00.098609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.153 [2024-10-01 13:44:00.100978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.153 [2024-10-01 13:44:00.101096] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.153 [2024-10-01 13:44:00.101134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.153 [2024-10-01 13:44:00.101155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.153 [2024-10-01 13:44:00.101188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.153 [2024-10-01 13:44:00.101221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.153 [2024-10-01 13:44:00.101239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.153 [2024-10-01 13:44:00.101254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.153 [2024-10-01 13:44:00.101286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.153 [2024-10-01 13:44:00.107899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.153 [2024-10-01 13:44:00.108018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.153 [2024-10-01 13:44:00.108053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.153 [2024-10-01 13:44:00.108071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.153 [2024-10-01 13:44:00.108105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.153 [2024-10-01 13:44:00.108138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.153 [2024-10-01 13:44:00.108179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.153 [2024-10-01 13:44:00.108195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.153 [2024-10-01 13:44:00.108229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.153 [2024-10-01 13:44:00.111133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.153 [2024-10-01 13:44:00.111248] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.153 [2024-10-01 13:44:00.111283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.153 [2024-10-01 13:44:00.111302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.153 [2024-10-01 13:44:00.111351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.153 [2024-10-01 13:44:00.112298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.153 [2024-10-01 13:44:00.112339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.153 [2024-10-01 13:44:00.112358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.153 [2024-10-01 13:44:00.112573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.153 [2024-10-01 13:44:00.118106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.153 [2024-10-01 13:44:00.118307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.153 [2024-10-01 13:44:00.118372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.153 [2024-10-01 13:44:00.118411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.153 [2024-10-01 13:44:00.118469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.153 [2024-10-01 13:44:00.118523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.153 [2024-10-01 13:44:00.118570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.153 [2024-10-01 13:44:00.118588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.153 [2024-10-01 13:44:00.118625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.153 [2024-10-01 13:44:00.121986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.153 [2024-10-01 13:44:00.122111] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.153 [2024-10-01 13:44:00.122157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.153 [2024-10-01 13:44:00.122179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.153 [2024-10-01 13:44:00.122213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.153 [2024-10-01 13:44:00.122246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.153 [2024-10-01 13:44:00.122265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.153 [2024-10-01 13:44:00.122280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.153 [2024-10-01 13:44:00.122312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.153 [2024-10-01 13:44:00.129135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.153 [2024-10-01 13:44:00.129277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.154 [2024-10-01 13:44:00.129324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.154 [2024-10-01 13:44:00.129345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.154 [2024-10-01 13:44:00.129380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.154 [2024-10-01 13:44:00.129413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.154 [2024-10-01 13:44:00.129432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.154 [2024-10-01 13:44:00.129447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.154 [2024-10-01 13:44:00.129479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.154 [2024-10-01 13:44:00.132111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.154 [2024-10-01 13:44:00.132236] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.154 [2024-10-01 13:44:00.132279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.154 [2024-10-01 13:44:00.132300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.154 [2024-10-01 13:44:00.132334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.154 [2024-10-01 13:44:00.132367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.154 [2024-10-01 13:44:00.132385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.154 [2024-10-01 13:44:00.132400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.154 [2024-10-01 13:44:00.132432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.154 [2024-10-01 13:44:00.139292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.154 [2024-10-01 13:44:00.139411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.154 [2024-10-01 13:44:00.139444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.154 [2024-10-01 13:44:00.139463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.154 [2024-10-01 13:44:00.139497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.154 [2024-10-01 13:44:00.139530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.154 [2024-10-01 13:44:00.139564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.154 [2024-10-01 13:44:00.139580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.154 [2024-10-01 13:44:00.140504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.154 [2024-10-01 13:44:00.143240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.154 [2024-10-01 13:44:00.143357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.154 [2024-10-01 13:44:00.143400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.154 [2024-10-01 13:44:00.143421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.154 [2024-10-01 13:44:00.143475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.154 [2024-10-01 13:44:00.143509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.154 [2024-10-01 13:44:00.143527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.154 [2024-10-01 13:44:00.143557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.154 [2024-10-01 13:44:00.143592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.154 [2024-10-01 13:44:00.150088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.154 [2024-10-01 13:44:00.150218] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.154 [2024-10-01 13:44:00.150251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.154 [2024-10-01 13:44:00.150269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.154 [2024-10-01 13:44:00.150303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.154 [2024-10-01 13:44:00.150335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.154 [2024-10-01 13:44:00.150354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.154 [2024-10-01 13:44:00.150368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.154 [2024-10-01 13:44:00.150400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.154 [2024-10-01 13:44:00.153331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.154 [2024-10-01 13:44:00.153447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.154 [2024-10-01 13:44:00.153489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.154 [2024-10-01 13:44:00.153508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.154 [2024-10-01 13:44:00.153574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.154 [2024-10-01 13:44:00.154493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.154 [2024-10-01 13:44:00.154546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.154 [2024-10-01 13:44:00.154568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.154 [2024-10-01 13:44:00.154757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.154 [2024-10-01 13:44:00.160188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.154 [2024-10-01 13:44:00.160304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.154 [2024-10-01 13:44:00.160337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.154 [2024-10-01 13:44:00.160356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.154 [2024-10-01 13:44:00.160390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.154 [2024-10-01 13:44:00.160423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.154 [2024-10-01 13:44:00.160441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.154 [2024-10-01 13:44:00.160474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.154 [2024-10-01 13:44:00.160510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.154 [2024-10-01 13:44:00.164204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.154 [2024-10-01 13:44:00.164323] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.154 [2024-10-01 13:44:00.164356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.154 [2024-10-01 13:44:00.164375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.154 [2024-10-01 13:44:00.164408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.154 [2024-10-01 13:44:00.164440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.154 [2024-10-01 13:44:00.164458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.154 [2024-10-01 13:44:00.164473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.154 [2024-10-01 13:44:00.164505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.154 [2024-10-01 13:44:00.171349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.154 [2024-10-01 13:44:00.171476] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.154 [2024-10-01 13:44:00.171509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.154 [2024-10-01 13:44:00.171528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.154 [2024-10-01 13:44:00.171579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.154 [2024-10-01 13:44:00.171613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.154 [2024-10-01 13:44:00.171632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.154 [2024-10-01 13:44:00.171647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.154 [2024-10-01 13:44:00.171679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.154 [2024-10-01 13:44:00.174361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.154 [2024-10-01 13:44:00.174475] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.154 [2024-10-01 13:44:00.174508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.154 [2024-10-01 13:44:00.174527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.155 [2024-10-01 13:44:00.174578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.155 [2024-10-01 13:44:00.174613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.155 [2024-10-01 13:44:00.174631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.155 [2024-10-01 13:44:00.174646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.155 [2024-10-01 13:44:00.174678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.155 [2024-10-01 13:44:00.181580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.155 [2024-10-01 13:44:00.181698] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.155 [2024-10-01 13:44:00.181750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.155 [2024-10-01 13:44:00.181771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.155 [2024-10-01 13:44:00.182702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.155 [2024-10-01 13:44:00.182933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.155 [2024-10-01 13:44:00.182971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.155 [2024-10-01 13:44:00.182990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.155 [2024-10-01 13:44:00.183069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.155 [2024-10-01 13:44:00.185481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.155 [2024-10-01 13:44:00.185621] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.155 [2024-10-01 13:44:00.185655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.155 [2024-10-01 13:44:00.185673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.155 [2024-10-01 13:44:00.185706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.155 [2024-10-01 13:44:00.185739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.155 [2024-10-01 13:44:00.185757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.155 [2024-10-01 13:44:00.185771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.155 [2024-10-01 13:44:00.185803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.155 [2024-10-01 13:44:00.192457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.155 [2024-10-01 13:44:00.192587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.155 [2024-10-01 13:44:00.192621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.155 [2024-10-01 13:44:00.192639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.155 [2024-10-01 13:44:00.192674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.155 [2024-10-01 13:44:00.192707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.155 [2024-10-01 13:44:00.192725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.155 [2024-10-01 13:44:00.192739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.155 [2024-10-01 13:44:00.192772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.155 [2024-10-01 13:44:00.195722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.155 [2024-10-01 13:44:00.195835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.155 [2024-10-01 13:44:00.195867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.155 [2024-10-01 13:44:00.195898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.155 [2024-10-01 13:44:00.195948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.155 [2024-10-01 13:44:00.196900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.155 [2024-10-01 13:44:00.196940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.155 [2024-10-01 13:44:00.196959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.155 [2024-10-01 13:44:00.197150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.155 [2024-10-01 13:44:00.202585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.155 [2024-10-01 13:44:00.202701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.155 [2024-10-01 13:44:00.202735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.155 [2024-10-01 13:44:00.202753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.155 [2024-10-01 13:44:00.202787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.155 [2024-10-01 13:44:00.202819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.155 [2024-10-01 13:44:00.202838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.155 [2024-10-01 13:44:00.202852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.155 [2024-10-01 13:44:00.202883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.155 [2024-10-01 13:44:00.206760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.155 [2024-10-01 13:44:00.206876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.155 [2024-10-01 13:44:00.206909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.155 [2024-10-01 13:44:00.206928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.155 [2024-10-01 13:44:00.206961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.155 [2024-10-01 13:44:00.206994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.155 [2024-10-01 13:44:00.207013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.155 [2024-10-01 13:44:00.207027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.155 [2024-10-01 13:44:00.207059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.155 [2024-10-01 13:44:00.213230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.155 [2024-10-01 13:44:00.213348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.155 [2024-10-01 13:44:00.213380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.155 [2024-10-01 13:44:00.213399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.155 [2024-10-01 13:44:00.213432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.155 [2024-10-01 13:44:00.213465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.155 [2024-10-01 13:44:00.213483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.155 [2024-10-01 13:44:00.213497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.155 [2024-10-01 13:44:00.213566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.155 [2024-10-01 13:44:00.216856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.155 [2024-10-01 13:44:00.216972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.155 [2024-10-01 13:44:00.217004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.155 [2024-10-01 13:44:00.217023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.155 [2024-10-01 13:44:00.217056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.155 [2024-10-01 13:44:00.217089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.155 [2024-10-01 13:44:00.217107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.155 [2024-10-01 13:44:00.217122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.155 [2024-10-01 13:44:00.217154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.155 [2024-10-01 13:44:00.224089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.155 [2024-10-01 13:44:00.224208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.155 [2024-10-01 13:44:00.224241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.155 [2024-10-01 13:44:00.224260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.155 [2024-10-01 13:44:00.224293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.155 [2024-10-01 13:44:00.224343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.155 [2024-10-01 13:44:00.224366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.155 [2024-10-01 13:44:00.224381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.156 [2024-10-01 13:44:00.224413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.156 [2024-10-01 13:44:00.227528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.156 [2024-10-01 13:44:00.227656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.156 [2024-10-01 13:44:00.227689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.156 [2024-10-01 13:44:00.227707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.156 [2024-10-01 13:44:00.227740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.156 [2024-10-01 13:44:00.227772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.156 [2024-10-01 13:44:00.227791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.156 [2024-10-01 13:44:00.227806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.156 [2024-10-01 13:44:00.228744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.156 [2024-10-01 13:44:00.234485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.156 [2024-10-01 13:44:00.234618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.156 [2024-10-01 13:44:00.234652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.156 [2024-10-01 13:44:00.234690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.156 [2024-10-01 13:44:00.234726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.156 [2024-10-01 13:44:00.234759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.156 [2024-10-01 13:44:00.234777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.156 [2024-10-01 13:44:00.234792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.156 [2024-10-01 13:44:00.234824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.156 [2024-10-01 13:44:00.238616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.156 [2024-10-01 13:44:00.238734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.156 [2024-10-01 13:44:00.238766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.156 [2024-10-01 13:44:00.238785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.156 [2024-10-01 13:44:00.238818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.156 [2024-10-01 13:44:00.238850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.156 [2024-10-01 13:44:00.238868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.156 [2024-10-01 13:44:00.238883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.156 [2024-10-01 13:44:00.238915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.156 [2024-10-01 13:44:00.245613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.156 [2024-10-01 13:44:00.245875] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.156 [2024-10-01 13:44:00.245922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.156 [2024-10-01 13:44:00.245941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.156 [2024-10-01 13:44:00.245983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.156 [2024-10-01 13:44:00.246036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.156 [2024-10-01 13:44:00.246059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.156 [2024-10-01 13:44:00.246074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.156 [2024-10-01 13:44:00.246107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.156 [2024-10-01 13:44:00.248801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.156 [2024-10-01 13:44:00.248916] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.156 [2024-10-01 13:44:00.248949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.156 [2024-10-01 13:44:00.248967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.156 [2024-10-01 13:44:00.248999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.156 [2024-10-01 13:44:00.249031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.156 [2024-10-01 13:44:00.249065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.156 [2024-10-01 13:44:00.249081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.156 [2024-10-01 13:44:00.249114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.156 [2024-10-01 13:44:00.256071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.156 [2024-10-01 13:44:00.256189] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.156 [2024-10-01 13:44:00.256224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.156 [2024-10-01 13:44:00.256243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.156 [2024-10-01 13:44:00.256276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.156 [2024-10-01 13:44:00.256309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.156 [2024-10-01 13:44:00.256327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.156 [2024-10-01 13:44:00.256342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.156 [2024-10-01 13:44:00.257290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.156 [2024-10-01 13:44:00.259216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.156 [2024-10-01 13:44:00.259337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.156 [2024-10-01 13:44:00.259371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.156 [2024-10-01 13:44:00.259390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.156 [2024-10-01 13:44:00.259423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.156 [2024-10-01 13:44:00.259456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.156 [2024-10-01 13:44:00.259474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.156 [2024-10-01 13:44:00.259489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.156 [2024-10-01 13:44:00.259521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.156 [2024-10-01 13:44:00.267511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.156 [2024-10-01 13:44:00.268283] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.156 [2024-10-01 13:44:00.268332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.156 [2024-10-01 13:44:00.268354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.156 [2024-10-01 13:44:00.268449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.156 [2024-10-01 13:44:00.268489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.156 [2024-10-01 13:44:00.268508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.156 [2024-10-01 13:44:00.268522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.156 [2024-10-01 13:44:00.268573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.156 [2024-10-01 13:44:00.269315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.156 [2024-10-01 13:44:00.269454] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.156 [2024-10-01 13:44:00.269487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.156 [2024-10-01 13:44:00.269506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.156 [2024-10-01 13:44:00.269555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.156 [2024-10-01 13:44:00.269593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.156 [2024-10-01 13:44:00.269613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.156 [2024-10-01 13:44:00.269628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.156 [2024-10-01 13:44:00.269660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.156 [2024-10-01 13:44:00.278455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.156 [2024-10-01 13:44:00.278593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.156 [2024-10-01 13:44:00.278628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.156 [2024-10-01 13:44:00.278647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.156 [2024-10-01 13:44:00.278682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.156 [2024-10-01 13:44:00.278714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.157 [2024-10-01 13:44:00.278732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.157 [2024-10-01 13:44:00.278747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.157 [2024-10-01 13:44:00.278779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.157 [2024-10-01 13:44:00.279426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.157 [2024-10-01 13:44:00.279529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.157 [2024-10-01 13:44:00.279574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.157 [2024-10-01 13:44:00.279592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.157 [2024-10-01 13:44:00.279626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.157 [2024-10-01 13:44:00.279676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.157 [2024-10-01 13:44:00.279699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.157 [2024-10-01 13:44:00.279713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.157 [2024-10-01 13:44:00.279745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.157 [2024-10-01 13:44:00.289786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.157 [2024-10-01 13:44:00.289870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.157 [2024-10-01 13:44:00.289957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.157 [2024-10-01 13:44:00.289989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.157 [2024-10-01 13:44:00.290027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.157 [2024-10-01 13:44:00.290100] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.157 [2024-10-01 13:44:00.290130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.157 [2024-10-01 13:44:00.290147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.157 [2024-10-01 13:44:00.290166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.157 [2024-10-01 13:44:00.290200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.157 [2024-10-01 13:44:00.290221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.157 [2024-10-01 13:44:00.290236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.157 [2024-10-01 13:44:00.290250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.157 [2024-10-01 13:44:00.290283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.157 [2024-10-01 13:44:00.290303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.157 [2024-10-01 13:44:00.290318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.157 [2024-10-01 13:44:00.290332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.157 [2024-10-01 13:44:00.290362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.157 [2024-10-01 13:44:00.299944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.157 [2024-10-01 13:44:00.300026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.157 [2024-10-01 13:44:00.300112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.157 [2024-10-01 13:44:00.300142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.157 [2024-10-01 13:44:00.300161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.157 [2024-10-01 13:44:00.301147] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.157 [2024-10-01 13:44:00.301192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.157 [2024-10-01 13:44:00.301214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.157 [2024-10-01 13:44:00.301235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.157 [2024-10-01 13:44:00.301445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.157 [2024-10-01 13:44:00.301476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.157 [2024-10-01 13:44:00.301492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.157 [2024-10-01 13:44:00.301507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.157 [2024-10-01 13:44:00.302792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.157 [2024-10-01 13:44:00.302831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.157 [2024-10-01 13:44:00.302850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.157 [2024-10-01 13:44:00.302888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.157 [2024-10-01 13:44:00.303766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.157 [2024-10-01 13:44:00.310826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.157 [2024-10-01 13:44:00.310898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.157 [2024-10-01 13:44:00.311024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.157 [2024-10-01 13:44:00.311070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.157 [2024-10-01 13:44:00.311100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.157 [2024-10-01 13:44:00.311171] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.157 [2024-10-01 13:44:00.311204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.157 [2024-10-01 13:44:00.311223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.157 [2024-10-01 13:44:00.311259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.157 [2024-10-01 13:44:00.311284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.157 [2024-10-01 13:44:00.311589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.157 [2024-10-01 13:44:00.311628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.157 [2024-10-01 13:44:00.311647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.157 [2024-10-01 13:44:00.311665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.157 [2024-10-01 13:44:00.311682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.157 [2024-10-01 13:44:00.311695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.157 [2024-10-01 13:44:00.311844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.157 [2024-10-01 13:44:00.311871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.157 [2024-10-01 13:44:00.320994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.157 [2024-10-01 13:44:00.321077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.157 [2024-10-01 13:44:00.321166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.157 [2024-10-01 13:44:00.321204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.157 [2024-10-01 13:44:00.321224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.157 [2024-10-01 13:44:00.321296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.157 [2024-10-01 13:44:00.321325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.157 [2024-10-01 13:44:00.321343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.158 [2024-10-01 13:44:00.321363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.158 [2024-10-01 13:44:00.321397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.158 [2024-10-01 13:44:00.321418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.158 [2024-10-01 13:44:00.321458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.158 [2024-10-01 13:44:00.321475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.158 [2024-10-01 13:44:00.321509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.158 [2024-10-01 13:44:00.321530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.158 [2024-10-01 13:44:00.321564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.158 [2024-10-01 13:44:00.321579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.158 [2024-10-01 13:44:00.321611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.158 [2024-10-01 13:44:00.332269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.158 [2024-10-01 13:44:00.332388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.158 [2024-10-01 13:44:00.332498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.158 [2024-10-01 13:44:00.332532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.158 [2024-10-01 13:44:00.332570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.158 [2024-10-01 13:44:00.332643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.158 [2024-10-01 13:44:00.332672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.158 [2024-10-01 13:44:00.332690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.158 [2024-10-01 13:44:00.332712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.158 [2024-10-01 13:44:00.332745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.158 [2024-10-01 13:44:00.332766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.158 [2024-10-01 13:44:00.332781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.158 [2024-10-01 13:44:00.332798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.158 [2024-10-01 13:44:00.332831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.158 [2024-10-01 13:44:00.332852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.158 [2024-10-01 13:44:00.332866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.158 [2024-10-01 13:44:00.332881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.158 [2024-10-01 13:44:00.332910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.158 [2024-10-01 13:44:00.342559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.158 [2024-10-01 13:44:00.342610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.158 [2024-10-01 13:44:00.342710] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.158 [2024-10-01 13:44:00.342743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.158 [2024-10-01 13:44:00.342761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.158 [2024-10-01 13:44:00.342844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.158 [2024-10-01 13:44:00.342871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.158 [2024-10-01 13:44:00.342888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.158 [2024-10-01 13:44:00.343821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.158 [2024-10-01 13:44:00.343868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.158 [2024-10-01 13:44:00.344090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.158 [2024-10-01 13:44:00.344127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.158 [2024-10-01 13:44:00.344145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.158 [2024-10-01 13:44:00.344164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.158 [2024-10-01 13:44:00.344180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.158 [2024-10-01 13:44:00.344194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.158 [2024-10-01 13:44:00.344307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.158 [2024-10-01 13:44:00.344330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.158 [2024-10-01 13:44:00.353516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.158 [2024-10-01 13:44:00.353627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.158 [2024-10-01 13:44:00.353762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.158 [2024-10-01 13:44:00.353798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.158 [2024-10-01 13:44:00.353818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.158 [2024-10-01 13:44:00.353870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.158 [2024-10-01 13:44:00.353896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.158 [2024-10-01 13:44:00.353913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.158 [2024-10-01 13:44:00.353950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.158 [2024-10-01 13:44:00.353974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.158 [2024-10-01 13:44:00.354236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.158 [2024-10-01 13:44:00.354265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.158 [2024-10-01 13:44:00.354282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.158 [2024-10-01 13:44:00.354299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.158 [2024-10-01 13:44:00.354314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.158 [2024-10-01 13:44:00.354328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.158 [2024-10-01 13:44:00.354480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.158 [2024-10-01 13:44:00.354528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.158 [2024-10-01 13:44:00.363711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.158 [2024-10-01 13:44:00.363790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.158 [2024-10-01 13:44:00.363885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.158 [2024-10-01 13:44:00.363918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.158 [2024-10-01 13:44:00.363937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.158 [2024-10-01 13:44:00.364007] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.158 [2024-10-01 13:44:00.364035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.158 [2024-10-01 13:44:00.364052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.158 [2024-10-01 13:44:00.364072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.158 [2024-10-01 13:44:00.364105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.158 [2024-10-01 13:44:00.364126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.158 [2024-10-01 13:44:00.364140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.158 [2024-10-01 13:44:00.364155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.158 [2024-10-01 13:44:00.364187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.158 [2024-10-01 13:44:00.364207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.159 [2024-10-01 13:44:00.364231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.159 [2024-10-01 13:44:00.364245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.159 [2024-10-01 13:44:00.364275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.159 [2024-10-01 13:44:00.374848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.159 [2024-10-01 13:44:00.374900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.159 [2024-10-01 13:44:00.374999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.159 [2024-10-01 13:44:00.375032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.159 [2024-10-01 13:44:00.375050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.159 [2024-10-01 13:44:00.375100] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.159 [2024-10-01 13:44:00.375126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.159 [2024-10-01 13:44:00.375142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.159 [2024-10-01 13:44:00.375175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.159 [2024-10-01 13:44:00.375199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.159 [2024-10-01 13:44:00.375226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.159 [2024-10-01 13:44:00.375243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.159 [2024-10-01 13:44:00.375275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.159 [2024-10-01 13:44:00.375293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.159 [2024-10-01 13:44:00.375309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.159 [2024-10-01 13:44:00.375323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.159 [2024-10-01 13:44:00.375357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.159 [2024-10-01 13:44:00.375377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.159 [2024-10-01 13:44:00.385040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.159 [2024-10-01 13:44:00.385094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.159 [2024-10-01 13:44:00.385195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.159 [2024-10-01 13:44:00.385228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.159 [2024-10-01 13:44:00.385247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.159 [2024-10-01 13:44:00.385298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.159 [2024-10-01 13:44:00.385324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.159 [2024-10-01 13:44:00.385341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.159 [2024-10-01 13:44:00.386272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.159 [2024-10-01 13:44:00.386318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.159 [2024-10-01 13:44:00.386550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.159 [2024-10-01 13:44:00.386587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.159 [2024-10-01 13:44:00.386606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.159 [2024-10-01 13:44:00.386624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.159 [2024-10-01 13:44:00.386640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.159 [2024-10-01 13:44:00.386653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.159 [2024-10-01 13:44:00.386767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.159 [2024-10-01 13:44:00.386789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.159 [2024-10-01 13:44:00.395912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.159 [2024-10-01 13:44:00.395963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.159 [2024-10-01 13:44:00.396064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.159 [2024-10-01 13:44:00.396095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.159 [2024-10-01 13:44:00.396113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.159 [2024-10-01 13:44:00.396164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.159 [2024-10-01 13:44:00.396190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.159 [2024-10-01 13:44:00.396232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.159 [2024-10-01 13:44:00.396268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.159 [2024-10-01 13:44:00.396291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.159 [2024-10-01 13:44:00.396318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.159 [2024-10-01 13:44:00.396336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.159 [2024-10-01 13:44:00.396351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.159 [2024-10-01 13:44:00.396367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.159 [2024-10-01 13:44:00.396383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.159 [2024-10-01 13:44:00.396397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.159 [2024-10-01 13:44:00.396689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.159 [2024-10-01 13:44:00.396718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.159 [2024-10-01 13:44:00.406214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.159 [2024-10-01 13:44:00.406313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.159 [2024-10-01 13:44:00.406452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.159 [2024-10-01 13:44:00.406489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.159 [2024-10-01 13:44:00.406509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.159 [2024-10-01 13:44:00.406581] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.159 [2024-10-01 13:44:00.406609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.159 [2024-10-01 13:44:00.406626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.159 [2024-10-01 13:44:00.406663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.159 [2024-10-01 13:44:00.406688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.159 [2024-10-01 13:44:00.406715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.159 [2024-10-01 13:44:00.406733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.159 [2024-10-01 13:44:00.406749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.159 [2024-10-01 13:44:00.406767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.159 [2024-10-01 13:44:00.406783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.159 [2024-10-01 13:44:00.406796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.159 [2024-10-01 13:44:00.406829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.159 [2024-10-01 13:44:00.406849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.159 [2024-10-01 13:44:00.417379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.159 [2024-10-01 13:44:00.417476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.159 [2024-10-01 13:44:00.417761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.159 [2024-10-01 13:44:00.417809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.160 [2024-10-01 13:44:00.417831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.160 [2024-10-01 13:44:00.417885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.160 [2024-10-01 13:44:00.417910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.160 [2024-10-01 13:44:00.417927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.160 [2024-10-01 13:44:00.417971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.160 [2024-10-01 13:44:00.417997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.160 [2024-10-01 13:44:00.418025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.160 [2024-10-01 13:44:00.418044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.160 [2024-10-01 13:44:00.418060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.160 [2024-10-01 13:44:00.418079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.160 [2024-10-01 13:44:00.418094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.160 [2024-10-01 13:44:00.418108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.160 [2024-10-01 13:44:00.418141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.160 [2024-10-01 13:44:00.418162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.160 [2024-10-01 13:44:00.427966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.160 [2024-10-01 13:44:00.428018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.160 [2024-10-01 13:44:00.428118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.160 [2024-10-01 13:44:00.428150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.160 [2024-10-01 13:44:00.428169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.160 [2024-10-01 13:44:00.428220] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.160 [2024-10-01 13:44:00.428246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.160 [2024-10-01 13:44:00.428263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.160 [2024-10-01 13:44:00.429194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.160 [2024-10-01 13:44:00.429240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.160 [2024-10-01 13:44:00.429460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.160 [2024-10-01 13:44:00.429499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.160 [2024-10-01 13:44:00.429518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.160 [2024-10-01 13:44:00.429567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.160 [2024-10-01 13:44:00.429588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.160 [2024-10-01 13:44:00.429602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.160 [2024-10-01 13:44:00.429717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.160 [2024-10-01 13:44:00.429740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.160 [2024-10-01 13:44:00.439146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.160 [2024-10-01 13:44:00.439229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.160 [2024-10-01 13:44:00.439356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.160 [2024-10-01 13:44:00.439391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.160 [2024-10-01 13:44:00.439411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.160 [2024-10-01 13:44:00.439464] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.160 [2024-10-01 13:44:00.439490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.160 [2024-10-01 13:44:00.439506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.160 [2024-10-01 13:44:00.439557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.160 [2024-10-01 13:44:00.439585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.160 [2024-10-01 13:44:00.439614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.160 [2024-10-01 13:44:00.439634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.160 [2024-10-01 13:44:00.439649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.160 [2024-10-01 13:44:00.439667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.160 [2024-10-01 13:44:00.439683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.160 [2024-10-01 13:44:00.439697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.160 [2024-10-01 13:44:00.439729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.160 [2024-10-01 13:44:00.439749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.160 [2024-10-01 13:44:00.449569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.160 [2024-10-01 13:44:00.449619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.160 [2024-10-01 13:44:00.449732] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.160 [2024-10-01 13:44:00.449764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.160 [2024-10-01 13:44:00.449783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.160 [2024-10-01 13:44:00.449834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.160 [2024-10-01 13:44:00.449859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.160 [2024-10-01 13:44:00.449900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.160 [2024-10-01 13:44:00.449937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.160 [2024-10-01 13:44:00.449961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.160 [2024-10-01 13:44:00.449988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.160 [2024-10-01 13:44:00.450007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.160 [2024-10-01 13:44:00.450022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.160 [2024-10-01 13:44:00.450039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.160 [2024-10-01 13:44:00.450055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.160 [2024-10-01 13:44:00.450068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.160 [2024-10-01 13:44:00.450100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.160 [2024-10-01 13:44:00.450120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.160 [2024-10-01 13:44:00.461014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.160 [2024-10-01 13:44:00.461069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.160 [2024-10-01 13:44:00.461178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.160 [2024-10-01 13:44:00.461211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.160 [2024-10-01 13:44:00.461229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.160 [2024-10-01 13:44:00.461280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.160 [2024-10-01 13:44:00.461306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.160 [2024-10-01 13:44:00.461322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.160 [2024-10-01 13:44:00.461356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.160 [2024-10-01 13:44:00.461380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.160 [2024-10-01 13:44:00.461407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.161 [2024-10-01 13:44:00.461425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.161 [2024-10-01 13:44:00.461440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.161 [2024-10-01 13:44:00.461457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.161 [2024-10-01 13:44:00.461473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.161 [2024-10-01 13:44:00.461486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.161 [2024-10-01 13:44:00.461518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.161 [2024-10-01 13:44:00.461552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.161 [2024-10-01 13:44:00.471218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.161 [2024-10-01 13:44:00.471269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.161 [2024-10-01 13:44:00.471388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.161 [2024-10-01 13:44:00.471435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.161 [2024-10-01 13:44:00.471456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.161 [2024-10-01 13:44:00.471508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.161 [2024-10-01 13:44:00.471549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.161 [2024-10-01 13:44:00.471571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.161 [2024-10-01 13:44:00.472516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.161 [2024-10-01 13:44:00.472575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.161 [2024-10-01 13:44:00.472781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.161 [2024-10-01 13:44:00.472818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.161 [2024-10-01 13:44:00.472836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.161 [2024-10-01 13:44:00.472854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.161 [2024-10-01 13:44:00.472870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.161 [2024-10-01 13:44:00.472884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.161 [2024-10-01 13:44:00.472996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.161 [2024-10-01 13:44:00.473018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.161 [2024-10-01 13:44:00.482066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.161 [2024-10-01 13:44:00.482116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.161 [2024-10-01 13:44:00.482214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.161 [2024-10-01 13:44:00.482246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.161 [2024-10-01 13:44:00.482264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.161 [2024-10-01 13:44:00.482315] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.161 [2024-10-01 13:44:00.482340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.161 [2024-10-01 13:44:00.482357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.161 [2024-10-01 13:44:00.482390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.161 [2024-10-01 13:44:00.482414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.161 [2024-10-01 13:44:00.482441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.161 [2024-10-01 13:44:00.482459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.161 [2024-10-01 13:44:00.482473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.161 [2024-10-01 13:44:00.482490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.161 [2024-10-01 13:44:00.482524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.161 [2024-10-01 13:44:00.482559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.161 [2024-10-01 13:44:00.482827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.161 [2024-10-01 13:44:00.482853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.161 [2024-10-01 13:44:00.492211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.161 [2024-10-01 13:44:00.492287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.161 [2024-10-01 13:44:00.492378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.161 [2024-10-01 13:44:00.492410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.161 [2024-10-01 13:44:00.492429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.161 [2024-10-01 13:44:00.492497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.161 [2024-10-01 13:44:00.492525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.161 [2024-10-01 13:44:00.492557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.161 [2024-10-01 13:44:00.492578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.161 [2024-10-01 13:44:00.492613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.161 [2024-10-01 13:44:00.492634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.161 [2024-10-01 13:44:00.492648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.161 [2024-10-01 13:44:00.492663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.161 [2024-10-01 13:44:00.492695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.161 [2024-10-01 13:44:00.492715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.161 [2024-10-01 13:44:00.492729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.161 [2024-10-01 13:44:00.492744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.161 [2024-10-01 13:44:00.492773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.161 [2024-10-01 13:44:00.503337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.161 [2024-10-01 13:44:00.503389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.161 [2024-10-01 13:44:00.503486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.161 [2024-10-01 13:44:00.503519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.161 [2024-10-01 13:44:00.503552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.161 [2024-10-01 13:44:00.503609] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.161 [2024-10-01 13:44:00.503635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.161 [2024-10-01 13:44:00.503652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.161 [2024-10-01 13:44:00.503706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.161 [2024-10-01 13:44:00.503732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.161 [2024-10-01 13:44:00.503760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.161 [2024-10-01 13:44:00.503778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.161 [2024-10-01 13:44:00.503792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.161 [2024-10-01 13:44:00.503809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.161 [2024-10-01 13:44:00.503825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.161 [2024-10-01 13:44:00.503838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.161 [2024-10-01 13:44:00.503870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.161 [2024-10-01 13:44:00.503903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.161 [2024-10-01 13:44:00.513695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.161 [2024-10-01 13:44:00.513796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.161 [2024-10-01 13:44:00.513945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.161 [2024-10-01 13:44:00.513982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.161 [2024-10-01 13:44:00.514001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.161 [2024-10-01 13:44:00.514056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.161 [2024-10-01 13:44:00.514081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.161 [2024-10-01 13:44:00.514098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.161 [2024-10-01 13:44:00.515072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.161 [2024-10-01 13:44:00.515120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.161 [2024-10-01 13:44:00.515319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.161 [2024-10-01 13:44:00.515355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.161 [2024-10-01 13:44:00.515375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.161 [2024-10-01 13:44:00.515396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.161 [2024-10-01 13:44:00.515412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.161 [2024-10-01 13:44:00.515426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.161 [2024-10-01 13:44:00.515559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.162 [2024-10-01 13:44:00.515584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.162 [2024-10-01 13:44:00.524748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.162 [2024-10-01 13:44:00.524828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.162 [2024-10-01 13:44:00.524956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.162 [2024-10-01 13:44:00.525018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.162 [2024-10-01 13:44:00.525040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.162 [2024-10-01 13:44:00.525095] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.162 [2024-10-01 13:44:00.525122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.162 [2024-10-01 13:44:00.525140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.162 [2024-10-01 13:44:00.525178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.162 [2024-10-01 13:44:00.525202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.162 [2024-10-01 13:44:00.525230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.162 [2024-10-01 13:44:00.525248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.162 [2024-10-01 13:44:00.525263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.162 [2024-10-01 13:44:00.525281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.162 [2024-10-01 13:44:00.525296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.162 [2024-10-01 13:44:00.525310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.162 [2024-10-01 13:44:00.525342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.162 [2024-10-01 13:44:00.525362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.162 [2024-10-01 13:44:00.535136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.162 [2024-10-01 13:44:00.535188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.162 [2024-10-01 13:44:00.535286] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.162 [2024-10-01 13:44:00.535319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.162 [2024-10-01 13:44:00.535338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.162 [2024-10-01 13:44:00.535388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.162 [2024-10-01 13:44:00.535414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.162 [2024-10-01 13:44:00.535430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.162 [2024-10-01 13:44:00.535463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.162 [2024-10-01 13:44:00.535487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.162 [2024-10-01 13:44:00.535515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.162 [2024-10-01 13:44:00.535549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.162 [2024-10-01 13:44:00.535568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.162 [2024-10-01 13:44:00.535585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.162 [2024-10-01 13:44:00.535601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.162 [2024-10-01 13:44:00.535631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.162 [2024-10-01 13:44:00.535667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.162 [2024-10-01 13:44:00.535687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.162 [2024-10-01 13:44:00.546272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.162 [2024-10-01 13:44:00.546326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.162 [2024-10-01 13:44:00.546435] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.162 [2024-10-01 13:44:00.546468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.162 [2024-10-01 13:44:00.546486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.162 [2024-10-01 13:44:00.546551] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.162 [2024-10-01 13:44:00.546579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.162 [2024-10-01 13:44:00.546596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.162 [2024-10-01 13:44:00.546631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.162 [2024-10-01 13:44:00.546654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.162 [2024-10-01 13:44:00.546700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.162 [2024-10-01 13:44:00.546723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.162 [2024-10-01 13:44:00.546737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.162 [2024-10-01 13:44:00.546754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.162 [2024-10-01 13:44:00.546770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.162 [2024-10-01 13:44:00.546785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.162 [2024-10-01 13:44:00.546818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.162 [2024-10-01 13:44:00.546838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.162 [2024-10-01 13:44:00.556723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.162 [2024-10-01 13:44:00.556780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.162 [2024-10-01 13:44:00.556885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.162 [2024-10-01 13:44:00.556924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.162 [2024-10-01 13:44:00.556945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.162 [2024-10-01 13:44:00.556997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.162 [2024-10-01 13:44:00.557024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.162 [2024-10-01 13:44:00.557041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.162 [2024-10-01 13:44:00.557985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.162 [2024-10-01 13:44:00.558058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.162 [2024-10-01 13:44:00.558268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.162 [2024-10-01 13:44:00.558307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.162 [2024-10-01 13:44:00.558325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.162 [2024-10-01 13:44:00.558344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.162 [2024-10-01 13:44:00.558360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.162 [2024-10-01 13:44:00.558374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.162 [2024-10-01 13:44:00.558489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.162 [2024-10-01 13:44:00.558512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.162 [2024-10-01 13:44:00.567904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.162 [2024-10-01 13:44:00.567962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.162 [2024-10-01 13:44:00.568071] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.162 [2024-10-01 13:44:00.568106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.162 [2024-10-01 13:44:00.568125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.162 [2024-10-01 13:44:00.568176] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.162 [2024-10-01 13:44:00.568202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.162 [2024-10-01 13:44:00.568219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.162 [2024-10-01 13:44:00.568253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.162 [2024-10-01 13:44:00.568276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.162 [2024-10-01 13:44:00.568303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.162 [2024-10-01 13:44:00.568321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.162 [2024-10-01 13:44:00.568336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.162 [2024-10-01 13:44:00.568354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.162 [2024-10-01 13:44:00.568369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.162 [2024-10-01 13:44:00.568383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.163 [2024-10-01 13:44:00.568415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.163 [2024-10-01 13:44:00.568436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.163 [2024-10-01 13:44:00.578349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.163 [2024-10-01 13:44:00.578403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.163 [2024-10-01 13:44:00.578504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.163 [2024-10-01 13:44:00.578550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.163 [2024-10-01 13:44:00.578598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.163 [2024-10-01 13:44:00.578656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.163 [2024-10-01 13:44:00.578683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.163 [2024-10-01 13:44:00.578700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.163 [2024-10-01 13:44:00.578734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.163 [2024-10-01 13:44:00.578758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.163 [2024-10-01 13:44:00.578786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.163 [2024-10-01 13:44:00.578804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.163 [2024-10-01 13:44:00.578819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.163 [2024-10-01 13:44:00.578836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.163 [2024-10-01 13:44:00.578852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.163 [2024-10-01 13:44:00.578866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.163 [2024-10-01 13:44:00.578898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.163 [2024-10-01 13:44:00.578918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.163 [2024-10-01 13:44:00.589622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.163 [2024-10-01 13:44:00.589680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.163 [2024-10-01 13:44:00.589794] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.163 [2024-10-01 13:44:00.589826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.163 [2024-10-01 13:44:00.589845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.163 [2024-10-01 13:44:00.589895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.163 [2024-10-01 13:44:00.589921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.163 [2024-10-01 13:44:00.589937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.163 [2024-10-01 13:44:00.589988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.163 [2024-10-01 13:44:00.590016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.163 [2024-10-01 13:44:00.590044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.163 [2024-10-01 13:44:00.590063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.163 [2024-10-01 13:44:00.590078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.163 [2024-10-01 13:44:00.590095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.163 [2024-10-01 13:44:00.590111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.163 [2024-10-01 13:44:00.590124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.163 [2024-10-01 13:44:00.590174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.163 [2024-10-01 13:44:00.590195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.163 [2024-10-01 13:44:00.600176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.163 [2024-10-01 13:44:00.600267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.163 [2024-10-01 13:44:00.600402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.163 [2024-10-01 13:44:00.600438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.163 [2024-10-01 13:44:00.600459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.163 [2024-10-01 13:44:00.600511] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.163 [2024-10-01 13:44:00.600554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.163 [2024-10-01 13:44:00.600576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.163 [2024-10-01 13:44:00.601517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.163 [2024-10-01 13:44:00.601578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.163 [2024-10-01 13:44:00.601780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.163 [2024-10-01 13:44:00.601818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.163 [2024-10-01 13:44:00.601838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.163 [2024-10-01 13:44:00.601857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.163 [2024-10-01 13:44:00.601873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.163 [2024-10-01 13:44:00.601886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.163 [2024-10-01 13:44:00.602035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.163 [2024-10-01 13:44:00.602062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.163 [2024-10-01 13:44:00.611656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.163 [2024-10-01 13:44:00.611717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.163 [2024-10-01 13:44:00.612625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.163 [2024-10-01 13:44:00.612680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.163 [2024-10-01 13:44:00.612704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.163 [2024-10-01 13:44:00.612761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.163 [2024-10-01 13:44:00.612788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.163 [2024-10-01 13:44:00.612819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.163 [2024-10-01 13:44:00.612969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.163 [2024-10-01 13:44:00.613022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.163 [2024-10-01 13:44:00.613349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.163 [2024-10-01 13:44:00.613391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.163 [2024-10-01 13:44:00.613410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.163 [2024-10-01 13:44:00.613430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.163 [2024-10-01 13:44:00.613446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.163 [2024-10-01 13:44:00.613459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.163 [2024-10-01 13:44:00.613630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.163 [2024-10-01 13:44:00.613658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.163 [2024-10-01 13:44:00.622836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.163 [2024-10-01 13:44:00.622890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.163 [2024-10-01 13:44:00.622994] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.163 [2024-10-01 13:44:00.623042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.163 [2024-10-01 13:44:00.623078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.163 [2024-10-01 13:44:00.623137] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.163 [2024-10-01 13:44:00.623163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.163 [2024-10-01 13:44:00.623180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.163 [2024-10-01 13:44:00.623216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.163 [2024-10-01 13:44:00.623240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.163 [2024-10-01 13:44:00.623267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.163 [2024-10-01 13:44:00.623285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.163 [2024-10-01 13:44:00.623300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.163 [2024-10-01 13:44:00.623318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.163 [2024-10-01 13:44:00.623333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.163 [2024-10-01 13:44:00.623347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.163 [2024-10-01 13:44:00.623379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.164 [2024-10-01 13:44:00.623399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.164 [2024-10-01 13:44:00.634198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.164 [2024-10-01 13:44:00.634273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.164 [2024-10-01 13:44:00.634393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.164 [2024-10-01 13:44:00.634430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.164 [2024-10-01 13:44:00.634450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.164 [2024-10-01 13:44:00.634550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.164 [2024-10-01 13:44:00.634580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.164 [2024-10-01 13:44:00.634597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.164 [2024-10-01 13:44:00.634635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.164 [2024-10-01 13:44:00.634660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.164 [2024-10-01 13:44:00.634687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.164 [2024-10-01 13:44:00.634706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.164 [2024-10-01 13:44:00.634721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.164 [2024-10-01 13:44:00.634739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.164 [2024-10-01 13:44:00.634756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.164 [2024-10-01 13:44:00.634770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.164 [2024-10-01 13:44:00.634802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.164 [2024-10-01 13:44:00.634829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.164 [2024-10-01 13:44:00.644481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.164 [2024-10-01 13:44:00.644569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.164 [2024-10-01 13:44:00.644712] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.164 [2024-10-01 13:44:00.644748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.164 [2024-10-01 13:44:00.644767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.164 [2024-10-01 13:44:00.644827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.164 [2024-10-01 13:44:00.644853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.164 [2024-10-01 13:44:00.644870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.164 [2024-10-01 13:44:00.645807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.164 [2024-10-01 13:44:00.645853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.164 [2024-10-01 13:44:00.646059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.164 [2024-10-01 13:44:00.646096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.164 [2024-10-01 13:44:00.646115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.164 [2024-10-01 13:44:00.646134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.164 [2024-10-01 13:44:00.646151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.164 [2024-10-01 13:44:00.646165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.164 [2024-10-01 13:44:00.646280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.164 [2024-10-01 13:44:00.646326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.164 [2024-10-01 13:44:00.655323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.164 [2024-10-01 13:44:00.655376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.164 [2024-10-01 13:44:00.655479] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.164 [2024-10-01 13:44:00.655511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.164 [2024-10-01 13:44:00.655530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.164 [2024-10-01 13:44:00.655601] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.164 [2024-10-01 13:44:00.655628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.164 [2024-10-01 13:44:00.655645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.164 [2024-10-01 13:44:00.655679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.164 [2024-10-01 13:44:00.655702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.164 [2024-10-01 13:44:00.655730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.164 [2024-10-01 13:44:00.655747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.164 [2024-10-01 13:44:00.655762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.164 [2024-10-01 13:44:00.655779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.164 [2024-10-01 13:44:00.655794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.164 [2024-10-01 13:44:00.655808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.164 [2024-10-01 13:44:00.656087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.164 [2024-10-01 13:44:00.656116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.164 8504.50 IOPS, 33.22 MiB/s [2024-10-01 13:44:00.668092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.164 [2024-10-01 13:44:00.668144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.164 [2024-10-01 13:44:00.669212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.164 [2024-10-01 13:44:00.669259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.164 [2024-10-01 13:44:00.669281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.164 [2024-10-01 13:44:00.669334] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.164 [2024-10-01 13:44:00.669359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.164 [2024-10-01 13:44:00.669375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.164 [2024-10-01 13:44:00.670218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.164 [2024-10-01 13:44:00.670265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.164 [2024-10-01 13:44:00.670448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.164 [2024-10-01 13:44:00.670504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.164 [2024-10-01 13:44:00.670524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.164 [2024-10-01 13:44:00.670558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.164 [2024-10-01 13:44:00.670577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.164 [2024-10-01 13:44:00.670590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.164 [2024-10-01 13:44:00.670705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.164 [2024-10-01 13:44:00.670728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.164 [2024-10-01 13:44:00.679445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.164 [2024-10-01 13:44:00.679497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.164 [2024-10-01 13:44:00.679628] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.164 [2024-10-01 13:44:00.679661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.164 [2024-10-01 13:44:00.679680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.164 [2024-10-01 13:44:00.679730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.164 [2024-10-01 13:44:00.679756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.164 [2024-10-01 13:44:00.679773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.164 [2024-10-01 13:44:00.679807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.164 [2024-10-01 13:44:00.679830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.164 [2024-10-01 13:44:00.679857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.164 [2024-10-01 13:44:00.679886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.164 [2024-10-01 13:44:00.679902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.164 [2024-10-01 13:44:00.679920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.164 [2024-10-01 13:44:00.679936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.164 [2024-10-01 13:44:00.679950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.164 [2024-10-01 13:44:00.679983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.164 [2024-10-01 13:44:00.680003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.164 [2024-10-01 13:44:00.690581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.164 [2024-10-01 13:44:00.690635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.164 [2024-10-01 13:44:00.690735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.164 [2024-10-01 13:44:00.690767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.164 [2024-10-01 13:44:00.690786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.164 [2024-10-01 13:44:00.690835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.164 [2024-10-01 13:44:00.690883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.164 [2024-10-01 13:44:00.690903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.165 [2024-10-01 13:44:00.690936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.165 [2024-10-01 13:44:00.690960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.165 [2024-10-01 13:44:00.690987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.165 [2024-10-01 13:44:00.691005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.165 [2024-10-01 13:44:00.691020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.165 [2024-10-01 13:44:00.691037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.165 [2024-10-01 13:44:00.691052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.165 [2024-10-01 13:44:00.691066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.165 [2024-10-01 13:44:00.691097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.165 [2024-10-01 13:44:00.691118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.165 [2024-10-01 13:44:00.700724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.165 [2024-10-01 13:44:00.700801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.165 [2024-10-01 13:44:00.700885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.165 [2024-10-01 13:44:00.700916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.165 [2024-10-01 13:44:00.700935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.165 [2024-10-01 13:44:00.701922] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.165 [2024-10-01 13:44:00.701967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.165 [2024-10-01 13:44:00.701989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.165 [2024-10-01 13:44:00.702009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.165 [2024-10-01 13:44:00.702200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.165 [2024-10-01 13:44:00.702240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.165 [2024-10-01 13:44:00.702259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.165 [2024-10-01 13:44:00.702273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.165 [2024-10-01 13:44:00.703566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.165 [2024-10-01 13:44:00.703605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.165 [2024-10-01 13:44:00.703624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.165 [2024-10-01 13:44:00.703639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.165 [2024-10-01 13:44:00.704521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.165 [2024-10-01 13:44:00.711523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.165 [2024-10-01 13:44:00.711591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.165 [2024-10-01 13:44:00.711694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.165 [2024-10-01 13:44:00.711727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.165 [2024-10-01 13:44:00.711746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.165 [2024-10-01 13:44:00.711797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.165 [2024-10-01 13:44:00.711823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.165 [2024-10-01 13:44:00.711839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.165 [2024-10-01 13:44:00.711884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.165 [2024-10-01 13:44:00.711910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.165 [2024-10-01 13:44:00.711939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.165 [2024-10-01 13:44:00.711957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.165 [2024-10-01 13:44:00.711972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.165 [2024-10-01 13:44:00.711989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.165 [2024-10-01 13:44:00.712005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.165 [2024-10-01 13:44:00.712019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.165 [2024-10-01 13:44:00.712292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.165 [2024-10-01 13:44:00.712319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.165 [2024-10-01 13:44:00.721721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.165 [2024-10-01 13:44:00.721776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.165 [2024-10-01 13:44:00.721878] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.165 [2024-10-01 13:44:00.721910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.165 [2024-10-01 13:44:00.721929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.165 [2024-10-01 13:44:00.721980] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.165 [2024-10-01 13:44:00.722005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.165 [2024-10-01 13:44:00.722022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.165 [2024-10-01 13:44:00.722056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.165 [2024-10-01 13:44:00.722081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.165 [2024-10-01 13:44:00.722107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.165 [2024-10-01 13:44:00.722126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.165 [2024-10-01 13:44:00.722162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.165 [2024-10-01 13:44:00.722181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.165 [2024-10-01 13:44:00.722197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.165 [2024-10-01 13:44:00.722210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.165 [2024-10-01 13:44:00.722243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.165 [2024-10-01 13:44:00.722263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.165 [2024-10-01 13:44:00.732749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.165 [2024-10-01 13:44:00.732806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.165 [2024-10-01 13:44:00.732910] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.165 [2024-10-01 13:44:00.732943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.165 [2024-10-01 13:44:00.732961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.165 [2024-10-01 13:44:00.733012] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.165 [2024-10-01 13:44:00.733037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.165 [2024-10-01 13:44:00.733054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.165 [2024-10-01 13:44:00.733088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.165 [2024-10-01 13:44:00.733112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.165 [2024-10-01 13:44:00.733140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.165 [2024-10-01 13:44:00.733157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.165 [2024-10-01 13:44:00.733172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.165 [2024-10-01 13:44:00.733189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.165 [2024-10-01 13:44:00.733205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.165 [2024-10-01 13:44:00.733218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.165 [2024-10-01 13:44:00.733250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.165 [2024-10-01 13:44:00.733270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.165 [2024-10-01 13:44:00.742898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.165 [2024-10-01 13:44:00.742984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.165 [2024-10-01 13:44:00.743073] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.165 [2024-10-01 13:44:00.743104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.165 [2024-10-01 13:44:00.743123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.165 [2024-10-01 13:44:00.744127] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.165 [2024-10-01 13:44:00.744173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.165 [2024-10-01 13:44:00.744218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.165 [2024-10-01 13:44:00.744240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.165 [2024-10-01 13:44:00.744442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.165 [2024-10-01 13:44:00.744483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.165 [2024-10-01 13:44:00.744501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.165 [2024-10-01 13:44:00.744517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.165 [2024-10-01 13:44:00.744647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.165 [2024-10-01 13:44:00.744672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.165 [2024-10-01 13:44:00.744687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.165 [2024-10-01 13:44:00.744701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.165 [2024-10-01 13:44:00.745952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.165 [2024-10-01 13:44:00.753856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.166 [2024-10-01 13:44:00.753930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.166 [2024-10-01 13:44:00.754056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.166 [2024-10-01 13:44:00.754091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.166 [2024-10-01 13:44:00.754111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.166 [2024-10-01 13:44:00.754161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.166 [2024-10-01 13:44:00.754187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.166 [2024-10-01 13:44:00.754204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.166 [2024-10-01 13:44:00.754239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.166 [2024-10-01 13:44:00.754264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.166 [2024-10-01 13:44:00.754291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.166 [2024-10-01 13:44:00.754309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.166 [2024-10-01 13:44:00.754325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.166 [2024-10-01 13:44:00.754343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.166 [2024-10-01 13:44:00.754359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.166 [2024-10-01 13:44:00.754373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.166 [2024-10-01 13:44:00.754406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.166 [2024-10-01 13:44:00.754427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.166 [2024-10-01 13:44:00.765156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.166 [2024-10-01 13:44:00.765276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.166 [2024-10-01 13:44:00.765417] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.166 [2024-10-01 13:44:00.765454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.166 [2024-10-01 13:44:00.765473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.166 [2024-10-01 13:44:00.765526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.166 [2024-10-01 13:44:00.765569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.166 [2024-10-01 13:44:00.765587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.166 [2024-10-01 13:44:00.765624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.166 [2024-10-01 13:44:00.765649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.166 [2024-10-01 13:44:00.765676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.166 [2024-10-01 13:44:00.765695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.166 [2024-10-01 13:44:00.765711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.166 [2024-10-01 13:44:00.765730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.166 [2024-10-01 13:44:00.765745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.166 [2024-10-01 13:44:00.765759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.166 [2024-10-01 13:44:00.765792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.166 [2024-10-01 13:44:00.765812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.166 [2024-10-01 13:44:00.776936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.166 [2024-10-01 13:44:00.777003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.166 [2024-10-01 13:44:00.777138] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.166 [2024-10-01 13:44:00.777174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.166 [2024-10-01 13:44:00.777193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.166 [2024-10-01 13:44:00.777246] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.166 [2024-10-01 13:44:00.777271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.166 [2024-10-01 13:44:00.777288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.166 [2024-10-01 13:44:00.777340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.166 [2024-10-01 13:44:00.777369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.166 [2024-10-01 13:44:00.777398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.166 [2024-10-01 13:44:00.777416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.166 [2024-10-01 13:44:00.777431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.166 [2024-10-01 13:44:00.777466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.166 [2024-10-01 13:44:00.777485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.166 [2024-10-01 13:44:00.777499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.166 [2024-10-01 13:44:00.777548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.166 [2024-10-01 13:44:00.777581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.166 [2024-10-01 13:44:00.788115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.166 [2024-10-01 13:44:00.788180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.166 [2024-10-01 13:44:00.789214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.166 [2024-10-01 13:44:00.789260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.166 [2024-10-01 13:44:00.789282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.166 [2024-10-01 13:44:00.789337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.166 [2024-10-01 13:44:00.789363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.166 [2024-10-01 13:44:00.789380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.166 [2024-10-01 13:44:00.789638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.166 [2024-10-01 13:44:00.789675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.166 [2024-10-01 13:44:00.789785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.166 [2024-10-01 13:44:00.789807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.166 [2024-10-01 13:44:00.789823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.166 [2024-10-01 13:44:00.789841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.166 [2024-10-01 13:44:00.789857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.166 [2024-10-01 13:44:00.789871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.166 [2024-10-01 13:44:00.791230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.166 [2024-10-01 13:44:00.791268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.166 [2024-10-01 13:44:00.798943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.166 [2024-10-01 13:44:00.799001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.166 [2024-10-01 13:44:00.799108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.166 [2024-10-01 13:44:00.799142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.166 [2024-10-01 13:44:00.799161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.166 [2024-10-01 13:44:00.799214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.166 [2024-10-01 13:44:00.799240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.166 [2024-10-01 13:44:00.799257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.166 [2024-10-01 13:44:00.799317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.166 [2024-10-01 13:44:00.799342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.166 [2024-10-01 13:44:00.799370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.166 [2024-10-01 13:44:00.799387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.166 [2024-10-01 13:44:00.799402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.166 [2024-10-01 13:44:00.799420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.166 [2024-10-01 13:44:00.799435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.166 [2024-10-01 13:44:00.799449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.166 [2024-10-01 13:44:00.799737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.166 [2024-10-01 13:44:00.799766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.166 [2024-10-01 13:44:00.809083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.166 [2024-10-01 13:44:00.809167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.166 [2024-10-01 13:44:00.809266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.166 [2024-10-01 13:44:00.809297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.166 [2024-10-01 13:44:00.809316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.166 [2024-10-01 13:44:00.809385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.166 [2024-10-01 13:44:00.809413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.166 [2024-10-01 13:44:00.809431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.166 [2024-10-01 13:44:00.809450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.166 [2024-10-01 13:44:00.809483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.166 [2024-10-01 13:44:00.809504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.167 [2024-10-01 13:44:00.809519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.167 [2024-10-01 13:44:00.809548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.167 [2024-10-01 13:44:00.809585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.167 [2024-10-01 13:44:00.809607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.167 [2024-10-01 13:44:00.809622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.167 [2024-10-01 13:44:00.809637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.167 [2024-10-01 13:44:00.809667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.167 [2024-10-01 13:44:00.820241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.167 [2024-10-01 13:44:00.820301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.167 [2024-10-01 13:44:00.820433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.167 [2024-10-01 13:44:00.820477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.167 [2024-10-01 13:44:00.820498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.167 [2024-10-01 13:44:00.820565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.167 [2024-10-01 13:44:00.820593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.167 [2024-10-01 13:44:00.820610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.167 [2024-10-01 13:44:00.820645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.167 [2024-10-01 13:44:00.820669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.167 [2024-10-01 13:44:00.820696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.167 [2024-10-01 13:44:00.820715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.167 [2024-10-01 13:44:00.820729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.167 [2024-10-01 13:44:00.820746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.167 [2024-10-01 13:44:00.820762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.167 [2024-10-01 13:44:00.820776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.167 [2024-10-01 13:44:00.820808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.167 [2024-10-01 13:44:00.820828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.167 [2024-10-01 13:44:00.830573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.167 [2024-10-01 13:44:00.830628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.167 [2024-10-01 13:44:00.830733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.167 [2024-10-01 13:44:00.830765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.167 [2024-10-01 13:44:00.830784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.167 [2024-10-01 13:44:00.830835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.167 [2024-10-01 13:44:00.830861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.167 [2024-10-01 13:44:00.830878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.167 [2024-10-01 13:44:00.831815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.167 [2024-10-01 13:44:00.831860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.167 [2024-10-01 13:44:00.832084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.167 [2024-10-01 13:44:00.832123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.167 [2024-10-01 13:44:00.832142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.167 [2024-10-01 13:44:00.832160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.167 [2024-10-01 13:44:00.832176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.167 [2024-10-01 13:44:00.832213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.167 [2024-10-01 13:44:00.832331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.167 [2024-10-01 13:44:00.832354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.167 [2024-10-01 13:44:00.841489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.167 [2024-10-01 13:44:00.841569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.167 [2024-10-01 13:44:00.841686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.167 [2024-10-01 13:44:00.841721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.167 [2024-10-01 13:44:00.841748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.167 [2024-10-01 13:44:00.841836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.167 [2024-10-01 13:44:00.841895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.167 [2024-10-01 13:44:00.841930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.167 [2024-10-01 13:44:00.842224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.167 [2024-10-01 13:44:00.842270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.167 [2024-10-01 13:44:00.842420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.167 [2024-10-01 13:44:00.842457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.167 [2024-10-01 13:44:00.842475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.167 [2024-10-01 13:44:00.842495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.167 [2024-10-01 13:44:00.842511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.167 [2024-10-01 13:44:00.842525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.167 [2024-10-01 13:44:00.842656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.167 [2024-10-01 13:44:00.842681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.167 [2024-10-01 13:44:00.851662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.167 [2024-10-01 13:44:00.851769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.167 [2024-10-01 13:44:00.851871] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.167 [2024-10-01 13:44:00.851919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.167 [2024-10-01 13:44:00.851938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.167 [2024-10-01 13:44:00.852011] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.167 [2024-10-01 13:44:00.852040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.167 [2024-10-01 13:44:00.852057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.167 [2024-10-01 13:44:00.852077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.167 [2024-10-01 13:44:00.852159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.167 [2024-10-01 13:44:00.852187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.167 [2024-10-01 13:44:00.852203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.167 [2024-10-01 13:44:00.852219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.167 [2024-10-01 13:44:00.852253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.167 [2024-10-01 13:44:00.852274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.167 [2024-10-01 13:44:00.852288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.167 [2024-10-01 13:44:00.852302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.167 [2024-10-01 13:44:00.852332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.167 [2024-10-01 13:44:00.864371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.167 [2024-10-01 13:44:00.864455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.167 [2024-10-01 13:44:00.864602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.167 [2024-10-01 13:44:00.864639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.167 [2024-10-01 13:44:00.864659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.167 [2024-10-01 13:44:00.864713] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.167 [2024-10-01 13:44:00.864739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.167 [2024-10-01 13:44:00.864756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.167 [2024-10-01 13:44:00.864792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.167 [2024-10-01 13:44:00.864816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.168 [2024-10-01 13:44:00.864844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.168 [2024-10-01 13:44:00.864863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.168 [2024-10-01 13:44:00.864879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.168 [2024-10-01 13:44:00.864897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.168 [2024-10-01 13:44:00.864913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.168 [2024-10-01 13:44:00.864927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.168 [2024-10-01 13:44:00.864959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.168 [2024-10-01 13:44:00.864979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.168 [2024-10-01 13:44:00.875520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.168 [2024-10-01 13:44:00.875633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.168 [2024-10-01 13:44:00.875775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.168 [2024-10-01 13:44:00.875812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.168 [2024-10-01 13:44:00.875867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.168 [2024-10-01 13:44:00.875948] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.168 [2024-10-01 13:44:00.875975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.168 [2024-10-01 13:44:00.875992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.168 [2024-10-01 13:44:00.876956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.168 [2024-10-01 13:44:00.877006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.168 [2024-10-01 13:44:00.877215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.168 [2024-10-01 13:44:00.877255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.168 [2024-10-01 13:44:00.877275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.168 [2024-10-01 13:44:00.877294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.168 [2024-10-01 13:44:00.877310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.168 [2024-10-01 13:44:00.877323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.168 [2024-10-01 13:44:00.877465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.168 [2024-10-01 13:44:00.877491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.168 [2024-10-01 13:44:00.887337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.168 [2024-10-01 13:44:00.887402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.168 [2024-10-01 13:44:00.887583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.168 [2024-10-01 13:44:00.887629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.168 [2024-10-01 13:44:00.887651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.168 [2024-10-01 13:44:00.887705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.168 [2024-10-01 13:44:00.887731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.168 [2024-10-01 13:44:00.887748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.168 [2024-10-01 13:44:00.887784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.168 [2024-10-01 13:44:00.887808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.168 [2024-10-01 13:44:00.887835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.168 [2024-10-01 13:44:00.887853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.168 [2024-10-01 13:44:00.887869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.168 [2024-10-01 13:44:00.887899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.168 [2024-10-01 13:44:00.887915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.168 [2024-10-01 13:44:00.887949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.168 [2024-10-01 13:44:00.888225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.168 [2024-10-01 13:44:00.888262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.168 [2024-10-01 13:44:00.897744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.168 [2024-10-01 13:44:00.897802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.168 [2024-10-01 13:44:00.897972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.168 [2024-10-01 13:44:00.898006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.168 [2024-10-01 13:44:00.898026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.168 [2024-10-01 13:44:00.898090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.168 [2024-10-01 13:44:00.898134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.168 [2024-10-01 13:44:00.898154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.168 [2024-10-01 13:44:00.898170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.168 [2024-10-01 13:44:00.898203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.168 [2024-10-01 13:44:00.909279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.168 [2024-10-01 13:44:00.909434] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.168 [2024-10-01 13:44:00.909469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.168 [2024-10-01 13:44:00.909488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.168 [2024-10-01 13:44:00.909562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.168 [2024-10-01 13:44:00.909607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.168 [2024-10-01 13:44:00.909626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.168 [2024-10-01 13:44:00.909641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.168 [2024-10-01 13:44:00.909694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.168 [2024-10-01 13:44:00.912511] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:17.168 [2024-10-01 13:44:00.922263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.168 [2024-10-01 13:44:00.922614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.168 [2024-10-01 13:44:00.922661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.168 [2024-10-01 13:44:00.922684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.168 [2024-10-01 13:44:00.922804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.168 [2024-10-01 13:44:00.922899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.168 [2024-10-01 13:44:00.922934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.168 [2024-10-01 13:44:00.922952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.168 [2024-10-01 13:44:00.923015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.168 [2024-10-01 13:44:00.933047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.168 [2024-10-01 13:44:00.933178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.168 [2024-10-01 13:44:00.933223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.168 [2024-10-01 13:44:00.933244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.168 [2024-10-01 13:44:00.933283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.168 [2024-10-01 13:44:00.933321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.168 [2024-10-01 13:44:00.933339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.168 [2024-10-01 13:44:00.933354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.168 [2024-10-01 13:44:00.933391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.168 [2024-10-01 13:44:00.943718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.168 [2024-10-01 13:44:00.943864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.168 [2024-10-01 13:44:00.943920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.168 [2024-10-01 13:44:00.943942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.168 [2024-10-01 13:44:00.943982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.168 [2024-10-01 13:44:00.944020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.168 [2024-10-01 13:44:00.944038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.168 [2024-10-01 13:44:00.944053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.168 [2024-10-01 13:44:00.944091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.168 [2024-10-01 13:44:00.956575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.168 [2024-10-01 13:44:00.957395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.168 [2024-10-01 13:44:00.957449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.168 [2024-10-01 13:44:00.957472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.168 [2024-10-01 13:44:00.957818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.168 [2024-10-01 13:44:00.958018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.168 [2024-10-01 13:44:00.958056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.168 [2024-10-01 13:44:00.958076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.169 [2024-10-01 13:44:00.958225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.169 [2024-10-01 13:44:00.967931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.169 [2024-10-01 13:44:00.968244] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.169 [2024-10-01 13:44:00.968294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.169 [2024-10-01 13:44:00.968347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.169 [2024-10-01 13:44:00.968436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.169 [2024-10-01 13:44:00.968506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.169 [2024-10-01 13:44:00.968547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.169 [2024-10-01 13:44:00.968567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.169 [2024-10-01 13:44:00.969680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.169 [2024-10-01 13:44:00.978166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.169 [2024-10-01 13:44:00.978297] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.169 [2024-10-01 13:44:00.978337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.169 [2024-10-01 13:44:00.978356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.169 [2024-10-01 13:44:00.978395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.169 [2024-10-01 13:44:00.978433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.169 [2024-10-01 13:44:00.978451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.169 [2024-10-01 13:44:00.978466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.169 [2024-10-01 13:44:00.978503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.169 [2024-10-01 13:44:00.988275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.169 [2024-10-01 13:44:00.988400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.169 [2024-10-01 13:44:00.988435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.169 [2024-10-01 13:44:00.988454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.169 [2024-10-01 13:44:00.989796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.169 [2024-10-01 13:44:00.990783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.169 [2024-10-01 13:44:00.990825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.169 [2024-10-01 13:44:00.990844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.169 [2024-10-01 13:44:00.990971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.169 [2024-10-01 13:44:00.999413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.169 [2024-10-01 13:44:01.000702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.169 [2024-10-01 13:44:01.000754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.169 [2024-10-01 13:44:01.000778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.169 [2024-10-01 13:44:01.001411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.169 [2024-10-01 13:44:01.001531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.169 [2024-10-01 13:44:01.001609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.169 [2024-10-01 13:44:01.001629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.169 [2024-10-01 13:44:01.001673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.169 [2024-10-01 13:44:01.009589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.169 [2024-10-01 13:44:01.009791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.169 [2024-10-01 13:44:01.009828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.169 [2024-10-01 13:44:01.009847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.169 [2024-10-01 13:44:01.009889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.169 [2024-10-01 13:44:01.009928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.169 [2024-10-01 13:44:01.009946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.169 [2024-10-01 13:44:01.009962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.169 [2024-10-01 13:44:01.010000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.169 [2024-10-01 13:44:01.020431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.169 [2024-10-01 13:44:01.020642] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.169 [2024-10-01 13:44:01.020680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.169 [2024-10-01 13:44:01.020700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.169 [2024-10-01 13:44:01.020742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.169 [2024-10-01 13:44:01.020781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.169 [2024-10-01 13:44:01.020800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.169 [2024-10-01 13:44:01.020816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.169 [2024-10-01 13:44:01.020854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.169 [2024-10-01 13:44:01.030751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.169 [2024-10-01 13:44:01.030951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.169 [2024-10-01 13:44:01.030988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.169 [2024-10-01 13:44:01.031009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.169 [2024-10-01 13:44:01.031051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.169 [2024-10-01 13:44:01.031089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.169 [2024-10-01 13:44:01.031108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.169 [2024-10-01 13:44:01.031124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.169 [2024-10-01 13:44:01.031400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.169 [2024-10-01 13:44:01.040985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.169 [2024-10-01 13:44:01.041933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.169 [2024-10-01 13:44:01.041982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.169 [2024-10-01 13:44:01.042005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.169 [2024-10-01 13:44:01.042195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.169 [2024-10-01 13:44:01.042297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.169 [2024-10-01 13:44:01.042320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.169 [2024-10-01 13:44:01.042337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.169 [2024-10-01 13:44:01.042376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.169 [2024-10-01 13:44:01.051734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.169 [2024-10-01 13:44:01.051978] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.169 [2024-10-01 13:44:01.052017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.169 [2024-10-01 13:44:01.052037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.169 [2024-10-01 13:44:01.052079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.169 [2024-10-01 13:44:01.052118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.169 [2024-10-01 13:44:01.052137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.169 [2024-10-01 13:44:01.052153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.169 [2024-10-01 13:44:01.053126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.169 [2024-10-01 13:44:01.061920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.169 [2024-10-01 13:44:01.062120] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.169 [2024-10-01 13:44:01.062156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.169 [2024-10-01 13:44:01.062175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.169 [2024-10-01 13:44:01.062216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.169 [2024-10-01 13:44:01.062279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.169 [2024-10-01 13:44:01.062304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.169 [2024-10-01 13:44:01.062321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.169 [2024-10-01 13:44:01.062359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.169 [2024-10-01 13:44:01.073327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.169 [2024-10-01 13:44:01.073528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.169 [2024-10-01 13:44:01.073579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.169 [2024-10-01 13:44:01.073600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.169 [2024-10-01 13:44:01.074749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.169 [2024-10-01 13:44:01.075427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.169 [2024-10-01 13:44:01.075467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.169 [2024-10-01 13:44:01.075487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.169 [2024-10-01 13:44:01.075586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.169 [2024-10-01 13:44:01.083489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.169 [2024-10-01 13:44:01.083668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.169 [2024-10-01 13:44:01.083704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.169 [2024-10-01 13:44:01.083724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.169 [2024-10-01 13:44:01.083763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.170 [2024-10-01 13:44:01.083801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.170 [2024-10-01 13:44:01.083820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.170 [2024-10-01 13:44:01.083835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.170 [2024-10-01 13:44:01.085072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.170 [2024-10-01 13:44:01.094101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.170 [2024-10-01 13:44:01.094289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.170 [2024-10-01 13:44:01.094326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.170 [2024-10-01 13:44:01.094345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.170 [2024-10-01 13:44:01.094388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.170 [2024-10-01 13:44:01.094427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.170 [2024-10-01 13:44:01.094445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.170 [2024-10-01 13:44:01.094461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.170 [2024-10-01 13:44:01.094498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.170 [2024-10-01 13:44:01.104479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.170 [2024-10-01 13:44:01.104689] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.170 [2024-10-01 13:44:01.104726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.170 [2024-10-01 13:44:01.104747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.170 [2024-10-01 13:44:01.104789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.170 [2024-10-01 13:44:01.104827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.170 [2024-10-01 13:44:01.104846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.170 [2024-10-01 13:44:01.104893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.170 [2024-10-01 13:44:01.105172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.170 [2024-10-01 13:44:01.115364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.170 [2024-10-01 13:44:01.115584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.170 [2024-10-01 13:44:01.115622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.170 [2024-10-01 13:44:01.115641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.170 [2024-10-01 13:44:01.115685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.170 [2024-10-01 13:44:01.115723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.170 [2024-10-01 13:44:01.115741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.170 [2024-10-01 13:44:01.115757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.170 [2024-10-01 13:44:01.116902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.170 [2024-10-01 13:44:01.126495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.170 [2024-10-01 13:44:01.126723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.170 [2024-10-01 13:44:01.126761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.170 [2024-10-01 13:44:01.126780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.170 [2024-10-01 13:44:01.126823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.170 [2024-10-01 13:44:01.126861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.170 [2024-10-01 13:44:01.126880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.170 [2024-10-01 13:44:01.126897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.170 [2024-10-01 13:44:01.126934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.170 [2024-10-01 13:44:01.137762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.170 [2024-10-01 13:44:01.137983] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.170 [2024-10-01 13:44:01.138020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.170 [2024-10-01 13:44:01.138040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.170 [2024-10-01 13:44:01.138083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.170 [2024-10-01 13:44:01.138121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.170 [2024-10-01 13:44:01.138139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.170 [2024-10-01 13:44:01.138156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.170 [2024-10-01 13:44:01.138194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.170 [2024-10-01 13:44:01.148183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.170 [2024-10-01 13:44:01.148380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.170 [2024-10-01 13:44:01.148448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.170 [2024-10-01 13:44:01.148471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.170 [2024-10-01 13:44:01.148513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.170 [2024-10-01 13:44:01.148807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.170 [2024-10-01 13:44:01.148847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.170 [2024-10-01 13:44:01.148867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.170 [2024-10-01 13:44:01.149023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.170 [2024-10-01 13:44:01.159177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.170 [2024-10-01 13:44:01.159367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.170 [2024-10-01 13:44:01.159404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.170 [2024-10-01 13:44:01.159424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.170 [2024-10-01 13:44:01.159465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.170 [2024-10-01 13:44:01.159502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.170 [2024-10-01 13:44:01.159520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.170 [2024-10-01 13:44:01.159553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.170 [2024-10-01 13:44:01.160682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.170 [2024-10-01 13:44:01.170392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.170 [2024-10-01 13:44:01.170604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.170 [2024-10-01 13:44:01.170640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.170 [2024-10-01 13:44:01.170661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.170 [2024-10-01 13:44:01.170703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.170 [2024-10-01 13:44:01.170741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.170 [2024-10-01 13:44:01.170760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.170 [2024-10-01 13:44:01.170776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.170 [2024-10-01 13:44:01.170814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.170 [2024-10-01 13:44:01.181694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.170 [2024-10-01 13:44:01.181885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.170 [2024-10-01 13:44:01.181922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.170 [2024-10-01 13:44:01.181942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.170 [2024-10-01 13:44:01.181983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.170 [2024-10-01 13:44:01.182053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.170 [2024-10-01 13:44:01.182073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.170 [2024-10-01 13:44:01.182088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.170 [2024-10-01 13:44:01.182127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.170 [2024-10-01 13:44:01.191915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.170 [2024-10-01 13:44:01.192059] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.170 [2024-10-01 13:44:01.192094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.170 [2024-10-01 13:44:01.192113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.170 [2024-10-01 13:44:01.192152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.170 [2024-10-01 13:44:01.192189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.170 [2024-10-01 13:44:01.192208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.170 [2024-10-01 13:44:01.192223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.170 [2024-10-01 13:44:01.192259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.170 [2024-10-01 13:44:01.202927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.170 [2024-10-01 13:44:01.203125] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.170 [2024-10-01 13:44:01.203161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.170 [2024-10-01 13:44:01.203181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.170 [2024-10-01 13:44:01.203223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.170 [2024-10-01 13:44:01.203262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.170 [2024-10-01 13:44:01.203280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.170 [2024-10-01 13:44:01.203296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.170 [2024-10-01 13:44:01.203334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.170 [2024-10-01 13:44:01.214155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.170 [2024-10-01 13:44:01.214358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.170 [2024-10-01 13:44:01.214396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.170 [2024-10-01 13:44:01.214417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.170 [2024-10-01 13:44:01.214458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.171 [2024-10-01 13:44:01.214496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.171 [2024-10-01 13:44:01.214516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.171 [2024-10-01 13:44:01.214531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.171 [2024-10-01 13:44:01.214617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.171 [2024-10-01 13:44:01.225201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.171 [2024-10-01 13:44:01.225355] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.171 [2024-10-01 13:44:01.225391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.171 [2024-10-01 13:44:01.225410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.171 [2024-10-01 13:44:01.225450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.171 [2024-10-01 13:44:01.225506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.171 [2024-10-01 13:44:01.225529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.171 [2024-10-01 13:44:01.225565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.171 [2024-10-01 13:44:01.225605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.171 [2024-10-01 13:44:01.235622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.171 [2024-10-01 13:44:01.235831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.171 [2024-10-01 13:44:01.235867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.171 [2024-10-01 13:44:01.235905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.171 [2024-10-01 13:44:01.235948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.171 [2024-10-01 13:44:01.235986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.171 [2024-10-01 13:44:01.236005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.171 [2024-10-01 13:44:01.236031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.171 [2024-10-01 13:44:01.236069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.171 [2024-10-01 13:44:01.246838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.171 [2024-10-01 13:44:01.247039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.171 [2024-10-01 13:44:01.247076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.171 [2024-10-01 13:44:01.247096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.171 [2024-10-01 13:44:01.247144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.171 [2024-10-01 13:44:01.248310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.171 [2024-10-01 13:44:01.248355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.171 [2024-10-01 13:44:01.248376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.171 [2024-10-01 13:44:01.248617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.171 [2024-10-01 13:44:01.257781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.171 [2024-10-01 13:44:01.257913] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.171 [2024-10-01 13:44:01.257948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.171 [2024-10-01 13:44:01.257999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.171 [2024-10-01 13:44:01.258041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.171 [2024-10-01 13:44:01.258079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.171 [2024-10-01 13:44:01.258097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.171 [2024-10-01 13:44:01.258113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.171 [2024-10-01 13:44:01.258165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.171 [2024-10-01 13:44:01.269600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.171 [2024-10-01 13:44:01.269895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.171 [2024-10-01 13:44:01.269947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.171 [2024-10-01 13:44:01.269978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.171 [2024-10-01 13:44:01.270941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.171 [2024-10-01 13:44:01.271228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.171 [2024-10-01 13:44:01.271286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.171 [2024-10-01 13:44:01.271320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.171 [2024-10-01 13:44:01.272514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.171 [2024-10-01 13:44:01.280595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.171 [2024-10-01 13:44:01.280921] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.171 [2024-10-01 13:44:01.280987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.171 [2024-10-01 13:44:01.281022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.171 [2024-10-01 13:44:01.281097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.171 [2024-10-01 13:44:01.281158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.171 [2024-10-01 13:44:01.281193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.171 [2024-10-01 13:44:01.281232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.171 [2024-10-01 13:44:01.281291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.171 [2024-10-01 13:44:01.292010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.171 [2024-10-01 13:44:01.293212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.171 [2024-10-01 13:44:01.293263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.171 [2024-10-01 13:44:01.293285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.171 [2024-10-01 13:44:01.293939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.171 [2024-10-01 13:44:01.294055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.171 [2024-10-01 13:44:01.294109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.171 [2024-10-01 13:44:01.294136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.171 [2024-10-01 13:44:01.294188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.171 [2024-10-01 13:44:01.303458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.171 [2024-10-01 13:44:01.304354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.171 [2024-10-01 13:44:01.304403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.171 [2024-10-01 13:44:01.304426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.171 [2024-10-01 13:44:01.304625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.171 [2024-10-01 13:44:01.304717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.171 [2024-10-01 13:44:01.304745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.171 [2024-10-01 13:44:01.304761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.171 [2024-10-01 13:44:01.304801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.171 [2024-10-01 13:44:01.313891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.171 [2024-10-01 13:44:01.314079] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.171 [2024-10-01 13:44:01.314137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.171 [2024-10-01 13:44:01.314173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.171 [2024-10-01 13:44:01.315389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.171 [2024-10-01 13:44:01.315733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.171 [2024-10-01 13:44:01.315779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.171 [2024-10-01 13:44:01.315799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.171 [2024-10-01 13:44:01.315851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.171 [2024-10-01 13:44:01.324053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.171 [2024-10-01 13:44:01.324256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.171 [2024-10-01 13:44:01.324310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.171 [2024-10-01 13:44:01.324332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.171 [2024-10-01 13:44:01.325823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.171 [2024-10-01 13:44:01.326940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.171 [2024-10-01 13:44:01.326990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.171 [2024-10-01 13:44:01.327012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.171 [2024-10-01 13:44:01.327160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.171 [2024-10-01 13:44:01.334272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.171 [2024-10-01 13:44:01.334475] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.171 [2024-10-01 13:44:01.334525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.172 [2024-10-01 13:44:01.334586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.172 [2024-10-01 13:44:01.335897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.172 [2024-10-01 13:44:01.336225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.172 [2024-10-01 13:44:01.336271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.172 [2024-10-01 13:44:01.336291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.172 [2024-10-01 13:44:01.337423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.172 [2024-10-01 13:44:01.344444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.172 [2024-10-01 13:44:01.344680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.172 [2024-10-01 13:44:01.344719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.172 [2024-10-01 13:44:01.344739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.172 [2024-10-01 13:44:01.345713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.172 [2024-10-01 13:44:01.345950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.172 [2024-10-01 13:44:01.345988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.172 [2024-10-01 13:44:01.346008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.172 [2024-10-01 13:44:01.346058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.172 [2024-10-01 13:44:01.354628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.172 [2024-10-01 13:44:01.356163] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.172 [2024-10-01 13:44:01.356215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.172 [2024-10-01 13:44:01.356238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.172 [2024-10-01 13:44:01.357207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.172 [2024-10-01 13:44:01.357365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.172 [2024-10-01 13:44:01.357403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.172 [2024-10-01 13:44:01.357423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.172 [2024-10-01 13:44:01.357465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.172 [2024-10-01 13:44:01.366008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.172 [2024-10-01 13:44:01.366151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.172 [2024-10-01 13:44:01.366189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.172 [2024-10-01 13:44:01.366208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.172 [2024-10-01 13:44:01.367358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.172 [2024-10-01 13:44:01.368045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.172 [2024-10-01 13:44:01.368087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.172 [2024-10-01 13:44:01.368106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.172 [2024-10-01 13:44:01.368218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.172 [2024-10-01 13:44:01.376122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.172 [2024-10-01 13:44:01.376246] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.172 [2024-10-01 13:44:01.376281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.172 [2024-10-01 13:44:01.376299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.172 [2024-10-01 13:44:01.376337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.172 [2024-10-01 13:44:01.376391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.172 [2024-10-01 13:44:01.376414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.172 [2024-10-01 13:44:01.376428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.172 [2024-10-01 13:44:01.377657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.172 [2024-10-01 13:44:01.386458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.172 [2024-10-01 13:44:01.386594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.172 [2024-10-01 13:44:01.386629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.172 [2024-10-01 13:44:01.386648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.172 [2024-10-01 13:44:01.386687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.172 [2024-10-01 13:44:01.386724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.172 [2024-10-01 13:44:01.386742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.172 [2024-10-01 13:44:01.386756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.172 [2024-10-01 13:44:01.386793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.172 [2024-10-01 13:44:01.396585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.172 [2024-10-01 13:44:01.396707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.172 [2024-10-01 13:44:01.396740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.172 [2024-10-01 13:44:01.396758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.172 [2024-10-01 13:44:01.396795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.172 [2024-10-01 13:44:01.396832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.172 [2024-10-01 13:44:01.396850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.172 [2024-10-01 13:44:01.396885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.172 [2024-10-01 13:44:01.396925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.172 [2024-10-01 13:44:01.407213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.172 [2024-10-01 13:44:01.407337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.172 [2024-10-01 13:44:01.407370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.172 [2024-10-01 13:44:01.407388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.172 [2024-10-01 13:44:01.407426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.172 [2024-10-01 13:44:01.407462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.172 [2024-10-01 13:44:01.407480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.172 [2024-10-01 13:44:01.407495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.172 [2024-10-01 13:44:01.407531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.172 [2024-10-01 13:44:01.418151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.172 [2024-10-01 13:44:01.418275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.172 [2024-10-01 13:44:01.418308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.172 [2024-10-01 13:44:01.418327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.172 [2024-10-01 13:44:01.418365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.172 [2024-10-01 13:44:01.418402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.172 [2024-10-01 13:44:01.418420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.172 [2024-10-01 13:44:01.418434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.172 [2024-10-01 13:44:01.418470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.172 [2024-10-01 13:44:01.429505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.172 [2024-10-01 13:44:01.429644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.172 [2024-10-01 13:44:01.429679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.172 [2024-10-01 13:44:01.429698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.172 [2024-10-01 13:44:01.429737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.172 [2024-10-01 13:44:01.429774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.172 [2024-10-01 13:44:01.429793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.172 [2024-10-01 13:44:01.429807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.172 [2024-10-01 13:44:01.429844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.172 [2024-10-01 13:44:01.439686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.172 [2024-10-01 13:44:01.439834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.172 [2024-10-01 13:44:01.439868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.172 [2024-10-01 13:44:01.439900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.172 [2024-10-01 13:44:01.439939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.172 [2024-10-01 13:44:01.439976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.172 [2024-10-01 13:44:01.439994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.172 [2024-10-01 13:44:01.440008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.172 [2024-10-01 13:44:01.440044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.172 [2024-10-01 13:44:01.449836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.172 [2024-10-01 13:44:01.449961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.172 [2024-10-01 13:44:01.449994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.172 [2024-10-01 13:44:01.450013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.172 [2024-10-01 13:44:01.450050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.172 [2024-10-01 13:44:01.450087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.172 [2024-10-01 13:44:01.450105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.172 [2024-10-01 13:44:01.450120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.172 [2024-10-01 13:44:01.450156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.172 [2024-10-01 13:44:01.460318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.172 [2024-10-01 13:44:01.460440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.172 [2024-10-01 13:44:01.460473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.173 [2024-10-01 13:44:01.460491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.173 [2024-10-01 13:44:01.460529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.173 [2024-10-01 13:44:01.460587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.173 [2024-10-01 13:44:01.460606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.173 [2024-10-01 13:44:01.460621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.173 [2024-10-01 13:44:01.460657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.173 [2024-10-01 13:44:01.471385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.173 [2024-10-01 13:44:01.471514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.173 [2024-10-01 13:44:01.471564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.173 [2024-10-01 13:44:01.471585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.173 [2024-10-01 13:44:01.471624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.173 [2024-10-01 13:44:01.471685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.173 [2024-10-01 13:44:01.471705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.173 [2024-10-01 13:44:01.471720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.173 [2024-10-01 13:44:01.471757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.173 [2024-10-01 13:44:01.482434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.173 [2024-10-01 13:44:01.482629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.173 [2024-10-01 13:44:01.482665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.173 [2024-10-01 13:44:01.482685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.173 [2024-10-01 13:44:01.482725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.173 [2024-10-01 13:44:01.482762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.173 [2024-10-01 13:44:01.482781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.173 [2024-10-01 13:44:01.482797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.173 [2024-10-01 13:44:01.482833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.173 [2024-10-01 13:44:01.493479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.173 [2024-10-01 13:44:01.493617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.173 [2024-10-01 13:44:01.493661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.173 [2024-10-01 13:44:01.493682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.173 [2024-10-01 13:44:01.493721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.173 [2024-10-01 13:44:01.493758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.173 [2024-10-01 13:44:01.493776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.173 [2024-10-01 13:44:01.493790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.173 [2024-10-01 13:44:01.493826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.173 [2024-10-01 13:44:01.503653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.173 [2024-10-01 13:44:01.503784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.173 [2024-10-01 13:44:01.503823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.173 [2024-10-01 13:44:01.503843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.173 [2024-10-01 13:44:01.503892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.173 [2024-10-01 13:44:01.503931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.173 [2024-10-01 13:44:01.503949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.173 [2024-10-01 13:44:01.503964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.173 [2024-10-01 13:44:01.504031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.173 [2024-10-01 13:44:01.514310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.173 [2024-10-01 13:44:01.514441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.173 [2024-10-01 13:44:01.514480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.173 [2024-10-01 13:44:01.514499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.173 [2024-10-01 13:44:01.514551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.173 [2024-10-01 13:44:01.514593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.173 [2024-10-01 13:44:01.514612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.173 [2024-10-01 13:44:01.514626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.173 [2024-10-01 13:44:01.514664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.173 [2024-10-01 13:44:01.525190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.173 [2024-10-01 13:44:01.525313] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.173 [2024-10-01 13:44:01.525346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.173 [2024-10-01 13:44:01.525365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.173 [2024-10-01 13:44:01.525404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.173 [2024-10-01 13:44:01.525441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.173 [2024-10-01 13:44:01.525459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.173 [2024-10-01 13:44:01.525474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.173 [2024-10-01 13:44:01.525509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.173 [2024-10-01 13:44:01.536145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.173 [2024-10-01 13:44:01.536278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.173 [2024-10-01 13:44:01.536312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.173 [2024-10-01 13:44:01.536331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.173 [2024-10-01 13:44:01.536369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.173 [2024-10-01 13:44:01.536406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.173 [2024-10-01 13:44:01.536425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.173 [2024-10-01 13:44:01.536439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.173 [2024-10-01 13:44:01.536475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.173 [2024-10-01 13:44:01.546284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.173 [2024-10-01 13:44:01.546410] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.173 [2024-10-01 13:44:01.546443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.173 [2024-10-01 13:44:01.546491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.173 [2024-10-01 13:44:01.546532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.173 [2024-10-01 13:44:01.546588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.173 [2024-10-01 13:44:01.546606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.173 [2024-10-01 13:44:01.546621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.173 [2024-10-01 13:44:01.546657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.173 [2024-10-01 13:44:01.557019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.173 [2024-10-01 13:44:01.557153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.173 [2024-10-01 13:44:01.557192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.173 [2024-10-01 13:44:01.557212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.173 [2024-10-01 13:44:01.557250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.173 [2024-10-01 13:44:01.557287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.173 [2024-10-01 13:44:01.557306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.173 [2024-10-01 13:44:01.557320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.173 [2024-10-01 13:44:01.557357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.173 [2024-10-01 13:44:01.568085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.173 [2024-10-01 13:44:01.568218] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.173 [2024-10-01 13:44:01.568252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.173 [2024-10-01 13:44:01.568271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.173 [2024-10-01 13:44:01.568309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.173 [2024-10-01 13:44:01.568347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.173 [2024-10-01 13:44:01.568365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.173 [2024-10-01 13:44:01.568379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.173 [2024-10-01 13:44:01.568416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.173 [2024-10-01 13:44:01.579176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.173 [2024-10-01 13:44:01.579357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.173 [2024-10-01 13:44:01.579394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.173 [2024-10-01 13:44:01.579413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.173 [2024-10-01 13:44:01.579453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.173 [2024-10-01 13:44:01.579491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.173 [2024-10-01 13:44:01.579558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.173 [2024-10-01 13:44:01.579578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.173 [2024-10-01 13:44:01.579618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.173 [2024-10-01 13:44:01.589319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.173 [2024-10-01 13:44:01.589442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.173 [2024-10-01 13:44:01.589475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.174 [2024-10-01 13:44:01.589493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.174 [2024-10-01 13:44:01.589531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.174 [2024-10-01 13:44:01.589587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.174 [2024-10-01 13:44:01.589606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.174 [2024-10-01 13:44:01.589621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.174 [2024-10-01 13:44:01.589658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.174 [2024-10-01 13:44:01.601691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.174 [2024-10-01 13:44:01.602003] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.174 [2024-10-01 13:44:01.602051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.174 [2024-10-01 13:44:01.602073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.174 [2024-10-01 13:44:01.603181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.174 [2024-10-01 13:44:01.603848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.174 [2024-10-01 13:44:01.603900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.174 [2024-10-01 13:44:01.603921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.174 [2024-10-01 13:44:01.604265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.174 [2024-10-01 13:44:01.613896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.174 [2024-10-01 13:44:01.614205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.174 [2024-10-01 13:44:01.614253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.174 [2024-10-01 13:44:01.614276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.174 [2024-10-01 13:44:01.614361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.174 [2024-10-01 13:44:01.614403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.174 [2024-10-01 13:44:01.614426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.174 [2024-10-01 13:44:01.614442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.174 [2024-10-01 13:44:01.615564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.174 [2024-10-01 13:44:01.624960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.174 [2024-10-01 13:44:01.625104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.174 [2024-10-01 13:44:01.625139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.174 [2024-10-01 13:44:01.625159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.174 [2024-10-01 13:44:01.625199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.174 [2024-10-01 13:44:01.625245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.174 [2024-10-01 13:44:01.625263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.174 [2024-10-01 13:44:01.625278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.174 [2024-10-01 13:44:01.625314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.174 [2024-10-01 13:44:01.635994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.174 [2024-10-01 13:44:01.636141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.174 [2024-10-01 13:44:01.636175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.174 [2024-10-01 13:44:01.636194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.174 [2024-10-01 13:44:01.636233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.174 [2024-10-01 13:44:01.636270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.174 [2024-10-01 13:44:01.636289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.174 [2024-10-01 13:44:01.636304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.174 [2024-10-01 13:44:01.636341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.174 [2024-10-01 13:44:01.646122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.174 [2024-10-01 13:44:01.646249] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.174 [2024-10-01 13:44:01.646283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.174 [2024-10-01 13:44:01.646302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.174 [2024-10-01 13:44:01.646340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.174 [2024-10-01 13:44:01.646376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.174 [2024-10-01 13:44:01.646395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.174 [2024-10-01 13:44:01.646409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.174 [2024-10-01 13:44:01.646445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.174 [2024-10-01 13:44:01.656949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.174 [2024-10-01 13:44:01.657078] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.174 [2024-10-01 13:44:01.657111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.174 [2024-10-01 13:44:01.657131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.174 [2024-10-01 13:44:01.657199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.174 [2024-10-01 13:44:01.657238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.174 [2024-10-01 13:44:01.657256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.174 [2024-10-01 13:44:01.657271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.174 [2024-10-01 13:44:01.657307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.174 8535.44 IOPS, 33.34 MiB/s [2024-10-01 13:44:01.668586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.174 [2024-10-01 13:44:01.670102] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.174 [2024-10-01 13:44:01.670161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.174 [2024-10-01 13:44:01.670187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.174 [2024-10-01 13:44:01.670410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.174 [2024-10-01 13:44:01.671204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.174 [2024-10-01 13:44:01.671245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.174 [2024-10-01 13:44:01.671265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.174 [2024-10-01 13:44:01.671453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.174 [2024-10-01 13:44:01.679109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.174 [2024-10-01 13:44:01.679295] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.174 [2024-10-01 13:44:01.679346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.174 [2024-10-01 13:44:01.679381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.174 [2024-10-01 13:44:01.679439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.174 [2024-10-01 13:44:01.679496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.174 [2024-10-01 13:44:01.679532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.174 [2024-10-01 13:44:01.679588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.174 [2024-10-01 13:44:01.680374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.174 [2024-10-01 13:44:01.689256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.174 [2024-10-01 13:44:01.689450] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.174 [2024-10-01 13:44:01.689488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.174 [2024-10-01 13:44:01.689508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.174 [2024-10-01 13:44:01.689797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.174 [2024-10-01 13:44:01.689977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.174 [2024-10-01 13:44:01.690011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.174 [2024-10-01 13:44:01.690061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.174 [2024-10-01 13:44:01.690183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.174 [2024-10-01 13:44:01.699698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.174 [2024-10-01 13:44:01.699831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.174 [2024-10-01 13:44:01.699866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.174 [2024-10-01 13:44:01.699902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.174 [2024-10-01 13:44:01.699942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.174 [2024-10-01 13:44:01.699980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.174 [2024-10-01 13:44:01.699998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.174 [2024-10-01 13:44:01.700013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.174 [2024-10-01 13:44:01.701144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.174 [2024-10-01 13:44:01.710684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.174 [2024-10-01 13:44:01.710814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.174 [2024-10-01 13:44:01.710849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.174 [2024-10-01 13:44:01.710867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.174 [2024-10-01 13:44:01.710906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.174 [2024-10-01 13:44:01.710942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.174 [2024-10-01 13:44:01.710961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.174 [2024-10-01 13:44:01.710976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.174 [2024-10-01 13:44:01.711012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.174 [2024-10-01 13:44:01.721799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.174 [2024-10-01 13:44:01.721929] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.174 [2024-10-01 13:44:01.721963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.175 [2024-10-01 13:44:01.721982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.175 [2024-10-01 13:44:01.722021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.175 [2024-10-01 13:44:01.722057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.175 [2024-10-01 13:44:01.722075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.175 [2024-10-01 13:44:01.722090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.175 [2024-10-01 13:44:01.722126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.175 [2024-10-01 13:44:01.731963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.175 [2024-10-01 13:44:01.732121] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.175 [2024-10-01 13:44:01.732167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.175 [2024-10-01 13:44:01.732185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.175 [2024-10-01 13:44:01.732223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.175 [2024-10-01 13:44:01.732277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.175 [2024-10-01 13:44:01.732300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.175 [2024-10-01 13:44:01.732315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.175 [2024-10-01 13:44:01.732352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.175 [2024-10-01 13:44:01.742717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.175 [2024-10-01 13:44:01.742912] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.175 [2024-10-01 13:44:01.742949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.175 [2024-10-01 13:44:01.742968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.175 [2024-10-01 13:44:01.743010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.175 [2024-10-01 13:44:01.743047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.175 [2024-10-01 13:44:01.743066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.175 [2024-10-01 13:44:01.743082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.175 [2024-10-01 13:44:01.744211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.175 [2024-10-01 13:44:01.753827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.175 [2024-10-01 13:44:01.754025] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.175 [2024-10-01 13:44:01.754061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.175 [2024-10-01 13:44:01.754081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.175 [2024-10-01 13:44:01.754123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.175 [2024-10-01 13:44:01.754160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.175 [2024-10-01 13:44:01.754179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.175 [2024-10-01 13:44:01.754194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.175 [2024-10-01 13:44:01.754232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.175 [2024-10-01 13:44:01.764864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.175 [2024-10-01 13:44:01.764989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.175 [2024-10-01 13:44:01.765022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.175 [2024-10-01 13:44:01.765041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.175 [2024-10-01 13:44:01.765078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.175 [2024-10-01 13:44:01.765157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.175 [2024-10-01 13:44:01.765179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.175 [2024-10-01 13:44:01.765194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.175 [2024-10-01 13:44:01.765230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.175 [2024-10-01 13:44:01.774989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.175 [2024-10-01 13:44:01.775119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.175 [2024-10-01 13:44:01.775152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.175 [2024-10-01 13:44:01.775178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.175 [2024-10-01 13:44:01.775216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.175 [2024-10-01 13:44:01.775253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.175 [2024-10-01 13:44:01.775271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.175 [2024-10-01 13:44:01.775285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.175 [2024-10-01 13:44:01.775322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.175 [2024-10-01 13:44:01.785631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.175 [2024-10-01 13:44:01.785754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.175 [2024-10-01 13:44:01.785787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.175 [2024-10-01 13:44:01.785805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.175 [2024-10-01 13:44:01.785843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.175 [2024-10-01 13:44:01.785880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.175 [2024-10-01 13:44:01.785898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.175 [2024-10-01 13:44:01.785912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.175 [2024-10-01 13:44:01.785949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.175 [2024-10-01 13:44:01.796477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.175 [2024-10-01 13:44:01.796614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.175 [2024-10-01 13:44:01.796654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.175 [2024-10-01 13:44:01.796674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.175 [2024-10-01 13:44:01.796712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.175 [2024-10-01 13:44:01.796749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.175 [2024-10-01 13:44:01.796767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.175 [2024-10-01 13:44:01.796782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.175 [2024-10-01 13:44:01.796843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.175 [2024-10-01 13:44:01.807477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.175 [2024-10-01 13:44:01.807614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.175 [2024-10-01 13:44:01.807649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.175 [2024-10-01 13:44:01.807668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.175 [2024-10-01 13:44:01.807707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.175 [2024-10-01 13:44:01.807743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.175 [2024-10-01 13:44:01.807761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.175 [2024-10-01 13:44:01.807776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.175 [2024-10-01 13:44:01.807811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.175 [2024-10-01 13:44:01.817603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.175 [2024-10-01 13:44:01.817724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.175 [2024-10-01 13:44:01.817757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.175 [2024-10-01 13:44:01.817776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.175 [2024-10-01 13:44:01.817814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.175 [2024-10-01 13:44:01.817851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.175 [2024-10-01 13:44:01.817870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.175 [2024-10-01 13:44:01.817884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.175 [2024-10-01 13:44:01.817921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.175 [2024-10-01 13:44:01.828241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.175 [2024-10-01 13:44:01.828371] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.175 [2024-10-01 13:44:01.828413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.175 [2024-10-01 13:44:01.828433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.175 [2024-10-01 13:44:01.828472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.175 [2024-10-01 13:44:01.828509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.175 [2024-10-01 13:44:01.828527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.175 [2024-10-01 13:44:01.828561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.175 [2024-10-01 13:44:01.828601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.175 [2024-10-01 13:44:01.839144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.175 [2024-10-01 13:44:01.839267] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.175 [2024-10-01 13:44:01.839301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.175 [2024-10-01 13:44:01.839341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.175 [2024-10-01 13:44:01.839382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.175 [2024-10-01 13:44:01.839420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.175 [2024-10-01 13:44:01.839438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.176 [2024-10-01 13:44:01.839452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.176 [2024-10-01 13:44:01.839489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.176 [2024-10-01 13:44:01.850150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.176 [2024-10-01 13:44:01.850275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.176 [2024-10-01 13:44:01.850318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.176 [2024-10-01 13:44:01.850340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.176 [2024-10-01 13:44:01.850378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.176 [2024-10-01 13:44:01.850416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.176 [2024-10-01 13:44:01.850434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.176 [2024-10-01 13:44:01.850448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.176 [2024-10-01 13:44:01.850485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.176 [2024-10-01 13:44:01.860258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.176 [2024-10-01 13:44:01.860382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.176 [2024-10-01 13:44:01.860414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.176 [2024-10-01 13:44:01.860433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.176 [2024-10-01 13:44:01.860470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.176 [2024-10-01 13:44:01.860508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.176 [2024-10-01 13:44:01.860526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.176 [2024-10-01 13:44:01.860558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.176 [2024-10-01 13:44:01.860598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.176 [2024-10-01 13:44:01.870876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.176 [2024-10-01 13:44:01.871003] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.176 [2024-10-01 13:44:01.871035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.176 [2024-10-01 13:44:01.871054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.176 [2024-10-01 13:44:01.871091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.176 [2024-10-01 13:44:01.871126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.176 [2024-10-01 13:44:01.871165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.176 [2024-10-01 13:44:01.871181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.176 [2024-10-01 13:44:01.871218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.176 [2024-10-01 13:44:01.881829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.176 [2024-10-01 13:44:01.881954] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.176 [2024-10-01 13:44:01.881987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.176 [2024-10-01 13:44:01.882005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.176 [2024-10-01 13:44:01.882043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.176 [2024-10-01 13:44:01.882080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.176 [2024-10-01 13:44:01.882097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.176 [2024-10-01 13:44:01.882111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.176 [2024-10-01 13:44:01.882147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.176 [2024-10-01 13:44:01.892771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.176 [2024-10-01 13:44:01.892891] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.176 [2024-10-01 13:44:01.892924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.176 [2024-10-01 13:44:01.892942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.176 [2024-10-01 13:44:01.892979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.176 [2024-10-01 13:44:01.893015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.176 [2024-10-01 13:44:01.893031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.176 [2024-10-01 13:44:01.893045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.176 [2024-10-01 13:44:01.893080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.176 [2024-10-01 13:44:01.902876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.176 [2024-10-01 13:44:01.902998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.176 [2024-10-01 13:44:01.903030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.176 [2024-10-01 13:44:01.903048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.176 [2024-10-01 13:44:01.903085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.176 [2024-10-01 13:44:01.903121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.176 [2024-10-01 13:44:01.903138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.176 [2024-10-01 13:44:01.903153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.176 [2024-10-01 13:44:01.903188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.176 [2024-10-01 13:44:01.913492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.176 [2024-10-01 13:44:01.913629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.176 [2024-10-01 13:44:01.913663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.176 [2024-10-01 13:44:01.913682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.176 [2024-10-01 13:44:01.913719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.176 [2024-10-01 13:44:01.913756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.176 [2024-10-01 13:44:01.913774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.176 [2024-10-01 13:44:01.913788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.176 [2024-10-01 13:44:01.913824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.176 [2024-10-01 13:44:01.924345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.176 [2024-10-01 13:44:01.924466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.176 [2024-10-01 13:44:01.924498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.176 [2024-10-01 13:44:01.924516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.176 [2024-10-01 13:44:01.924571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.176 [2024-10-01 13:44:01.924611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.176 [2024-10-01 13:44:01.924630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.176 [2024-10-01 13:44:01.924644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.176 [2024-10-01 13:44:01.924679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.176 [2024-10-01 13:44:01.935301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.176 [2024-10-01 13:44:01.935424] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.176 [2024-10-01 13:44:01.935456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.176 [2024-10-01 13:44:01.935474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.176 [2024-10-01 13:44:01.935512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.176 [2024-10-01 13:44:01.935563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.176 [2024-10-01 13:44:01.935583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.176 [2024-10-01 13:44:01.935597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.176 [2024-10-01 13:44:01.935633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.176 [2024-10-01 13:44:01.945400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.176 [2024-10-01 13:44:01.945522] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.176 [2024-10-01 13:44:01.945570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.176 [2024-10-01 13:44:01.945612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.176 [2024-10-01 13:44:01.945654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.176 [2024-10-01 13:44:01.945691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.176 [2024-10-01 13:44:01.945709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.176 [2024-10-01 13:44:01.945723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.176 [2024-10-01 13:44:01.945759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.176 [2024-10-01 13:44:01.956013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.176 [2024-10-01 13:44:01.956138] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.176 [2024-10-01 13:44:01.956171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.176 [2024-10-01 13:44:01.956190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.176 [2024-10-01 13:44:01.956227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.176 [2024-10-01 13:44:01.956264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.176 [2024-10-01 13:44:01.956282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.176 [2024-10-01 13:44:01.956296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.176 [2024-10-01 13:44:01.956332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.176 [2024-10-01 13:44:01.966881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.176 [2024-10-01 13:44:01.967013] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.176 [2024-10-01 13:44:01.967046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.176 [2024-10-01 13:44:01.967065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.176 [2024-10-01 13:44:01.967102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.176 [2024-10-01 13:44:01.967139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.176 [2024-10-01 13:44:01.967156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.176 [2024-10-01 13:44:01.967170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.176 [2024-10-01 13:44:01.967207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.176 [2024-10-01 13:44:01.977851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.176 [2024-10-01 13:44:01.977974] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.176 [2024-10-01 13:44:01.978007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.176 [2024-10-01 13:44:01.978025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.177 [2024-10-01 13:44:01.978063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.177 [2024-10-01 13:44:01.978099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.177 [2024-10-01 13:44:01.978117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.177 [2024-10-01 13:44:01.978159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.177 [2024-10-01 13:44:01.978198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.177 [2024-10-01 13:44:01.987961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.177 [2024-10-01 13:44:01.988084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.177 [2024-10-01 13:44:01.988116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.177 [2024-10-01 13:44:01.988135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.177 [2024-10-01 13:44:01.988172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.177 [2024-10-01 13:44:01.988208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.177 [2024-10-01 13:44:01.988226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.177 [2024-10-01 13:44:01.988241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.177 [2024-10-01 13:44:01.988277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.177 [2024-10-01 13:44:01.998570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.177 [2024-10-01 13:44:01.998694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.177 [2024-10-01 13:44:01.998726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.177 [2024-10-01 13:44:01.998744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.177 [2024-10-01 13:44:01.998782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.177 [2024-10-01 13:44:01.998819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.177 [2024-10-01 13:44:01.998836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.177 [2024-10-01 13:44:01.998850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.177 [2024-10-01 13:44:01.998886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.177 [2024-10-01 13:44:02.009469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.177 [2024-10-01 13:44:02.009616] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.177 [2024-10-01 13:44:02.009656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.177 [2024-10-01 13:44:02.009676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.177 [2024-10-01 13:44:02.009714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.177 [2024-10-01 13:44:02.009751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.177 [2024-10-01 13:44:02.009769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.177 [2024-10-01 13:44:02.009784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.177 [2024-10-01 13:44:02.009821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.177 [2024-10-01 13:44:02.020413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.177 [2024-10-01 13:44:02.020580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.177 [2024-10-01 13:44:02.020625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.177 [2024-10-01 13:44:02.020644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.177 [2024-10-01 13:44:02.020683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.177 [2024-10-01 13:44:02.020721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.177 [2024-10-01 13:44:02.020738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.177 [2024-10-01 13:44:02.020752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.177 [2024-10-01 13:44:02.020789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.177 [2024-10-01 13:44:02.030559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.177 [2024-10-01 13:44:02.030682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.177 [2024-10-01 13:44:02.030715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.177 [2024-10-01 13:44:02.030734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.177 [2024-10-01 13:44:02.030772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.177 [2024-10-01 13:44:02.030808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.177 [2024-10-01 13:44:02.030826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.177 [2024-10-01 13:44:02.030841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.177 [2024-10-01 13:44:02.030877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.177 [2024-10-01 13:44:02.041130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.177 [2024-10-01 13:44:02.041253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.177 [2024-10-01 13:44:02.041287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.177 [2024-10-01 13:44:02.041305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.177 [2024-10-01 13:44:02.041344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.177 [2024-10-01 13:44:02.041380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.177 [2024-10-01 13:44:02.041398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.177 [2024-10-01 13:44:02.041412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.177 [2024-10-01 13:44:02.041449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.177 [2024-10-01 13:44:02.052009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.177 [2024-10-01 13:44:02.052131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.177 [2024-10-01 13:44:02.052164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.177 [2024-10-01 13:44:02.052182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.177 [2024-10-01 13:44:02.052239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.177 [2024-10-01 13:44:02.052277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.177 [2024-10-01 13:44:02.052296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.177 [2024-10-01 13:44:02.052310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.177 [2024-10-01 13:44:02.052346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.177 [2024-10-01 13:44:02.062939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.177 [2024-10-01 13:44:02.063062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.177 [2024-10-01 13:44:02.063096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.177 [2024-10-01 13:44:02.063114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.177 [2024-10-01 13:44:02.063151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.177 [2024-10-01 13:44:02.063188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.177 [2024-10-01 13:44:02.063205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.177 [2024-10-01 13:44:02.063219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.177 [2024-10-01 13:44:02.063255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.177 [2024-10-01 13:44:02.073050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.177 [2024-10-01 13:44:02.073171] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.177 [2024-10-01 13:44:02.073203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.177 [2024-10-01 13:44:02.073222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.177 [2024-10-01 13:44:02.073260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.177 [2024-10-01 13:44:02.073296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.177 [2024-10-01 13:44:02.073314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.177 [2024-10-01 13:44:02.073329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.177 [2024-10-01 13:44:02.073365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.177 [2024-10-01 13:44:02.083739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.177 [2024-10-01 13:44:02.083886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.177 [2024-10-01 13:44:02.083922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.177 [2024-10-01 13:44:02.083941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.177 [2024-10-01 13:44:02.083980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.177 [2024-10-01 13:44:02.084018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.177 [2024-10-01 13:44:02.084036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.177 [2024-10-01 13:44:02.084050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.177 [2024-10-01 13:44:02.084107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.177 [2024-10-01 13:44:02.093853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.177 [2024-10-01 13:44:02.094913] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.177 [2024-10-01 13:44:02.094960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.177 [2024-10-01 13:44:02.094989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.177 [2024-10-01 13:44:02.095217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.177 [2024-10-01 13:44:02.095291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.177 [2024-10-01 13:44:02.095314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.177 [2024-10-01 13:44:02.095336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.177 [2024-10-01 13:44:02.095376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.177 [2024-10-01 13:44:02.106348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.177 [2024-10-01 13:44:02.106480] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.177 [2024-10-01 13:44:02.106514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.177 [2024-10-01 13:44:02.106547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.177 [2024-10-01 13:44:02.106591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.177 [2024-10-01 13:44:02.106628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.178 [2024-10-01 13:44:02.106647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.178 [2024-10-01 13:44:02.106662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.178 [2024-10-01 13:44:02.106711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.178 [2024-10-01 13:44:02.116741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.178 [2024-10-01 13:44:02.116872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.178 [2024-10-01 13:44:02.116906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.178 [2024-10-01 13:44:02.116924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.178 [2024-10-01 13:44:02.116962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.178 [2024-10-01 13:44:02.116999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.178 [2024-10-01 13:44:02.117017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.178 [2024-10-01 13:44:02.117031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.178 [2024-10-01 13:44:02.117068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.178 [2024-10-01 13:44:02.127917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.178 [2024-10-01 13:44:02.128221] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.178 [2024-10-01 13:44:02.128292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.178 [2024-10-01 13:44:02.128315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.178 [2024-10-01 13:44:02.128404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.178 [2024-10-01 13:44:02.128446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.178 [2024-10-01 13:44:02.128465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.178 [2024-10-01 13:44:02.128479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.178 [2024-10-01 13:44:02.128516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.178 [2024-10-01 13:44:02.138034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.178 [2024-10-01 13:44:02.138171] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.178 [2024-10-01 13:44:02.138205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.178 [2024-10-01 13:44:02.138224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.178 [2024-10-01 13:44:02.138263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.178 [2024-10-01 13:44:02.138299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.178 [2024-10-01 13:44:02.138318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.178 [2024-10-01 13:44:02.138332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.178 [2024-10-01 13:44:02.138368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.178 [2024-10-01 13:44:02.148139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.178 [2024-10-01 13:44:02.148264] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.178 [2024-10-01 13:44:02.148310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.178 [2024-10-01 13:44:02.148329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.178 [2024-10-01 13:44:02.148366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.178 [2024-10-01 13:44:02.148402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.178 [2024-10-01 13:44:02.148419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.178 [2024-10-01 13:44:02.148433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.178 [2024-10-01 13:44:02.148469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.178 [2024-10-01 13:44:02.159408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.178 [2024-10-01 13:44:02.159547] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.178 [2024-10-01 13:44:02.159582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.178 [2024-10-01 13:44:02.159601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.178 [2024-10-01 13:44:02.160704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.178 [2024-10-01 13:44:02.161381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.178 [2024-10-01 13:44:02.161422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.178 [2024-10-01 13:44:02.161440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.178 [2024-10-01 13:44:02.161547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.178 [2024-10-01 13:44:02.169519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.178 [2024-10-01 13:44:02.169656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.178 [2024-10-01 13:44:02.169690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.178 [2024-10-01 13:44:02.169709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.178 [2024-10-01 13:44:02.169752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.178 [2024-10-01 13:44:02.169791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.178 [2024-10-01 13:44:02.169809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.178 [2024-10-01 13:44:02.169824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.178 [2024-10-01 13:44:02.169859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.178 [2024-10-01 13:44:02.180286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.178 [2024-10-01 13:44:02.180413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.178 [2024-10-01 13:44:02.180448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.178 [2024-10-01 13:44:02.180467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.178 [2024-10-01 13:44:02.180505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.178 [2024-10-01 13:44:02.180557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.178 [2024-10-01 13:44:02.180578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.178 [2024-10-01 13:44:02.180593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.178 [2024-10-01 13:44:02.180629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.178 [2024-10-01 13:44:02.180988] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb23a00 was disconnected and freed. reset controller. 
00:16:17.178 [2024-10-01 13:44:02.181054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.178 [2024-10-01 13:44:02.181129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.178 [2024-10-01 13:44:02.184489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.178 [2024-10-01 13:44:02.184564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.178 [2024-10-01 13:44:02.184587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.178 [2024-10-01 13:44:02.184602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.178 [2024-10-01 13:44:02.184636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.178 [2024-10-01 13:44:02.191155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.178 [2024-10-01 13:44:02.191309] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.178 [2024-10-01 13:44:02.191344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.178 [2024-10-01 13:44:02.191362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.178 [2024-10-01 13:44:02.191412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.178 [2024-10-01 13:44:02.191455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.178 [2024-10-01 13:44:02.191489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.178 [2024-10-01 13:44:02.191507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.178 [2024-10-01 13:44:02.191521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.178 [2024-10-01 13:44:02.191568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.178 [2024-10-01 13:44:02.191633] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.178 [2024-10-01 13:44:02.191661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.178 [2024-10-01 13:44:02.191678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.178 [2024-10-01 13:44:02.191973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.178 [2024-10-01 13:44:02.192144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.178 [2024-10-01 13:44:02.192180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.178 [2024-10-01 13:44:02.192197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.178 [2024-10-01 13:44:02.192310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.178 [2024-10-01 13:44:02.202201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.178 [2024-10-01 13:44:02.202256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.178 [2024-10-01 13:44:02.202358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.202391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.179 [2024-10-01 13:44:02.202409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.179 [2024-10-01 13:44:02.202460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.202496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.179 [2024-10-01 13:44:02.202516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.179 [2024-10-01 13:44:02.202566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.179 [2024-10-01 13:44:02.202593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.179 [2024-10-01 13:44:02.203711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.179 [2024-10-01 13:44:02.203751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.179 [2024-10-01 13:44:02.203769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:17.179 [2024-10-01 13:44:02.203807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.179 [2024-10-01 13:44:02.203827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.179 [2024-10-01 13:44:02.203840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.179 [2024-10-01 13:44:02.204070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.179 [2024-10-01 13:44:02.204099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.179 [2024-10-01 13:44:02.212337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.179 [2024-10-01 13:44:02.212420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.179 [2024-10-01 13:44:02.212509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.212557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.179 [2024-10-01 13:44:02.212579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.179 [2024-10-01 13:44:02.212660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.212691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.179 [2024-10-01 13:44:02.212709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.179 [2024-10-01 13:44:02.212729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.179 [2024-10-01 13:44:02.213677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.179 [2024-10-01 13:44:02.213720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.179 [2024-10-01 13:44:02.213739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.179 [2024-10-01 13:44:02.213753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.179 [2024-10-01 13:44:02.213947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.179 [2024-10-01 13:44:02.213974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.179 [2024-10-01 13:44:02.213989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.179 [2024-10-01 13:44:02.214004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.179 [2024-10-01 13:44:02.214044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.179 [2024-10-01 13:44:02.222447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.179 [2024-10-01 13:44:02.222583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.222622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.179 [2024-10-01 13:44:02.222642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.179 [2024-10-01 13:44:02.222693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.179 [2024-10-01 13:44:02.222735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.179 [2024-10-01 13:44:02.222768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.179 [2024-10-01 13:44:02.222808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.179 [2024-10-01 13:44:02.222824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.179 [2024-10-01 13:44:02.224198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.179 [2024-10-01 13:44:02.224299] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.224332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.179 [2024-10-01 13:44:02.224350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.179 [2024-10-01 13:44:02.225297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.179 [2024-10-01 13:44:02.225445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.179 [2024-10-01 13:44:02.225472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.179 [2024-10-01 13:44:02.225488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.179 [2024-10-01 13:44:02.225523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.179 [2024-10-01 13:44:02.233741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.179 [2024-10-01 13:44:02.233812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.179 [2024-10-01 13:44:02.233927] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.233961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.179 [2024-10-01 13:44:02.233980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.179 [2024-10-01 13:44:02.234032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.234058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.179 [2024-10-01 13:44:02.234081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.179 [2024-10-01 13:44:02.235181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.179 [2024-10-01 13:44:02.235236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.179 [2024-10-01 13:44:02.235925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.179 [2024-10-01 13:44:02.235967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.179 [2024-10-01 13:44:02.235987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.179 [2024-10-01 13:44:02.236006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.179 [2024-10-01 13:44:02.236021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.179 [2024-10-01 13:44:02.236035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.179 [2024-10-01 13:44:02.236368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.179 [2024-10-01 13:44:02.236407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.179 [2024-10-01 13:44:02.243936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.179 [2024-10-01 13:44:02.244044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.179 [2024-10-01 13:44:02.244160] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.244192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.179 [2024-10-01 13:44:02.244212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.179 [2024-10-01 13:44:02.244283] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.244311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.179 [2024-10-01 13:44:02.244328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.179 [2024-10-01 13:44:02.244347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.179 [2024-10-01 13:44:02.245622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.179 [2024-10-01 13:44:02.245670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.179 [2024-10-01 13:44:02.245689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.179 [2024-10-01 13:44:02.245704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.179 [2024-10-01 13:44:02.245940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.179 [2024-10-01 13:44:02.245970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.179 [2024-10-01 13:44:02.245993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.179 [2024-10-01 13:44:02.246009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.179 [2024-10-01 13:44:02.246804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.179 [2024-10-01 13:44:02.254676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.179 [2024-10-01 13:44:02.254736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.179 [2024-10-01 13:44:02.254861] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.254896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.179 [2024-10-01 13:44:02.254916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.179 [2024-10-01 13:44:02.254968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.255006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.179 [2024-10-01 13:44:02.255027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.179 [2024-10-01 13:44:02.255068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.179 [2024-10-01 13:44:02.255094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.179 [2024-10-01 13:44:02.255121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.179 [2024-10-01 13:44:02.255138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.179 [2024-10-01 13:44:02.255153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.179 [2024-10-01 13:44:02.255170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.179 [2024-10-01 13:44:02.255212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.179 [2024-10-01 13:44:02.255231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.179 [2024-10-01 13:44:02.255267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.179 [2024-10-01 13:44:02.255288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.179 [2024-10-01 13:44:02.265298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.179 [2024-10-01 13:44:02.265356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.179 [2024-10-01 13:44:02.265460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.265503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.179 [2024-10-01 13:44:02.265524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.179 [2024-10-01 13:44:02.265595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.179 [2024-10-01 13:44:02.265623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.180 [2024-10-01 13:44:02.265641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.180 [2024-10-01 13:44:02.265675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.180 [2024-10-01 13:44:02.265699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.180 [2024-10-01 13:44:02.265973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.180 [2024-10-01 13:44:02.266016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.180 [2024-10-01 13:44:02.266046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.180 [2024-10-01 13:44:02.266073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.180 [2024-10-01 13:44:02.266097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.180 [2024-10-01 13:44:02.266111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.180 [2024-10-01 13:44:02.266251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.180 [2024-10-01 13:44:02.266287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.180 [2024-10-01 13:44:02.276288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.180 [2024-10-01 13:44:02.276342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.180 [2024-10-01 13:44:02.276450] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.180 [2024-10-01 13:44:02.276484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.180 [2024-10-01 13:44:02.276502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.180 [2024-10-01 13:44:02.276573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.180 [2024-10-01 13:44:02.276601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.180 [2024-10-01 13:44:02.276618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.180 [2024-10-01 13:44:02.276656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.180 [2024-10-01 13:44:02.276725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.180 [2024-10-01 13:44:02.277856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.180 [2024-10-01 13:44:02.277904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.180 [2024-10-01 13:44:02.277934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.180 [2024-10-01 13:44:02.277956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.180 [2024-10-01 13:44:02.277972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.180 [2024-10-01 13:44:02.277986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.180 [2024-10-01 13:44:02.278233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.180 [2024-10-01 13:44:02.278272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.180 [2024-10-01 13:44:02.287353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.180 [2024-10-01 13:44:02.287636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.180 [2024-10-01 13:44:02.287746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.180 [2024-10-01 13:44:02.287780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.180 [2024-10-01 13:44:02.287799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.180 [2024-10-01 13:44:02.287893] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.180 [2024-10-01 13:44:02.287924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.180 [2024-10-01 13:44:02.287942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.180 [2024-10-01 13:44:02.287961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.180 [2024-10-01 13:44:02.287996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.180 [2024-10-01 13:44:02.288017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.180 [2024-10-01 13:44:02.288031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.180 [2024-10-01 13:44:02.288046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.180 [2024-10-01 13:44:02.288095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.180 [2024-10-01 13:44:02.288123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.180 [2024-10-01 13:44:02.288138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.180 [2024-10-01 13:44:02.288152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.180 [2024-10-01 13:44:02.288184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.180 [2024-10-01 13:44:02.298805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.180 [2024-10-01 13:44:02.298859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.180 [2024-10-01 13:44:02.298961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.180 [2024-10-01 13:44:02.299012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.180 [2024-10-01 13:44:02.299043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.180 [2024-10-01 13:44:02.299096] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.180 [2024-10-01 13:44:02.299122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.180 [2024-10-01 13:44:02.299138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.180 [2024-10-01 13:44:02.299172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.180 [2024-10-01 13:44:02.299195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.180 [2024-10-01 13:44:02.299222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.180 [2024-10-01 13:44:02.299239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.180 [2024-10-01 13:44:02.299254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.180 [2024-10-01 13:44:02.299272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.180 [2024-10-01 13:44:02.299287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.180 [2024-10-01 13:44:02.299301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.180 [2024-10-01 13:44:02.299333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.180 [2024-10-01 13:44:02.299353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.180 [2024-10-01 13:44:02.309355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.180 [2024-10-01 13:44:02.309407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.180 [2024-10-01 13:44:02.309508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.180 [2024-10-01 13:44:02.309566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.180 [2024-10-01 13:44:02.309589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.180 [2024-10-01 13:44:02.309648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.180 [2024-10-01 13:44:02.309674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.180 [2024-10-01 13:44:02.309691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.180 [2024-10-01 13:44:02.309726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.180 [2024-10-01 13:44:02.309751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.180 [2024-10-01 13:44:02.309778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.180 [2024-10-01 13:44:02.309803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.180 [2024-10-01 13:44:02.309822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.180 [2024-10-01 13:44:02.309839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.180 [2024-10-01 13:44:02.309854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.180 [2024-10-01 13:44:02.309890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.180 [2024-10-01 13:44:02.310158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.180 [2024-10-01 13:44:02.310197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.180 [2024-10-01 13:44:02.320263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.180 [2024-10-01 13:44:02.320501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.180 [2024-10-01 13:44:02.320618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.180 [2024-10-01 13:44:02.320652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.180 [2024-10-01 13:44:02.320670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.180 [2024-10-01 13:44:02.320796] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.180 [2024-10-01 13:44:02.320832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.180 [2024-10-01 13:44:02.320849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.180 [2024-10-01 13:44:02.320868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.180 [2024-10-01 13:44:02.321975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.180 [2024-10-01 13:44:02.322023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.180 [2024-10-01 13:44:02.322042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.180 [2024-10-01 13:44:02.322056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.180 [2024-10-01 13:44:02.322295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.180 [2024-10-01 13:44:02.322324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.180 [2024-10-01 13:44:02.322340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.180 [2024-10-01 13:44:02.322355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.180 [2024-10-01 13:44:02.323441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.180 [2024-10-01 13:44:02.330364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.180 [2024-10-01 13:44:02.330487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.180 [2024-10-01 13:44:02.330521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.180 [2024-10-01 13:44:02.330556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.180 [2024-10-01 13:44:02.331492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.180 [2024-10-01 13:44:02.331772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.180 [2024-10-01 13:44:02.331813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.180 [2024-10-01 13:44:02.331831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.180 [2024-10-01 13:44:02.331890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.180 [2024-10-01 13:44:02.331919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.180 [2024-10-01 13:44:02.332026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.180 [2024-10-01 13:44:02.332059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.180 [2024-10-01 13:44:02.332077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.180 [2024-10-01 13:44:02.332110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.181 [2024-10-01 13:44:02.332142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.181 [2024-10-01 13:44:02.332160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.181 [2024-10-01 13:44:02.332174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.181 [2024-10-01 13:44:02.332205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.181 [2024-10-01 13:44:02.340458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.181 [2024-10-01 13:44:02.340600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.181 [2024-10-01 13:44:02.340634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.181 [2024-10-01 13:44:02.340653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.181 [2024-10-01 13:44:02.342003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.181 [2024-10-01 13:44:02.342986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.181 [2024-10-01 13:44:02.343028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.181 [2024-10-01 13:44:02.343047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.181 [2024-10-01 13:44:02.343178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.181 [2024-10-01 13:44:02.343216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.181 [2024-10-01 13:44:02.343301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.181 [2024-10-01 13:44:02.343334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.181 [2024-10-01 13:44:02.343352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.181 [2024-10-01 13:44:02.343385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.181 [2024-10-01 13:44:02.343417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.181 [2024-10-01 13:44:02.343435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.181 [2024-10-01 13:44:02.343449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.181 [2024-10-01 13:44:02.343488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.181 [2024-10-01 13:44:02.351383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.181 [2024-10-01 13:44:02.351507] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.181 [2024-10-01 13:44:02.351553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.181 [2024-10-01 13:44:02.351575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.181 [2024-10-01 13:44:02.352691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.181 [2024-10-01 13:44:02.353371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.181 [2024-10-01 13:44:02.353412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.181 [2024-10-01 13:44:02.353431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.181 [2024-10-01 13:44:02.353552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.181 [2024-10-01 13:44:02.353598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.181 [2024-10-01 13:44:02.353692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.181 [2024-10-01 13:44:02.353724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.181 [2024-10-01 13:44:02.353743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.181 [2024-10-01 13:44:02.354023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.181 [2024-10-01 13:44:02.354191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.181 [2024-10-01 13:44:02.354237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.181 [2024-10-01 13:44:02.354255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.181 [2024-10-01 13:44:02.354368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.181 [2024-10-01 13:44:02.361487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.181 [2024-10-01 13:44:02.361624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.181 [2024-10-01 13:44:02.361659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.181 [2024-10-01 13:44:02.361678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.181 [2024-10-01 13:44:02.361712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.181 [2024-10-01 13:44:02.361755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.181 [2024-10-01 13:44:02.361776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.181 [2024-10-01 13:44:02.361790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.181 [2024-10-01 13:44:02.361823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.181 [2024-10-01 13:44:02.364370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.181 [2024-10-01 13:44:02.364492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.181 [2024-10-01 13:44:02.364552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.181 [2024-10-01 13:44:02.364575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.181 [2024-10-01 13:44:02.364610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.181 [2024-10-01 13:44:02.364643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.181 [2024-10-01 13:44:02.364661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.181 [2024-10-01 13:44:02.364692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.181 [2024-10-01 13:44:02.364727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.181 [2024-10-01 13:44:02.372170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.181 [2024-10-01 13:44:02.372298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.181 [2024-10-01 13:44:02.372332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.181 [2024-10-01 13:44:02.372350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.181 [2024-10-01 13:44:02.372384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.181 [2024-10-01 13:44:02.372417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.181 [2024-10-01 13:44:02.372435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.181 [2024-10-01 13:44:02.372450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.181 [2024-10-01 13:44:02.372482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.181 [2024-10-01 13:44:02.375418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.181 [2024-10-01 13:44:02.375550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.181 [2024-10-01 13:44:02.375585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.181 [2024-10-01 13:44:02.375603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.181 [2024-10-01 13:44:02.375637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.181 [2024-10-01 13:44:02.375670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.181 [2024-10-01 13:44:02.375688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.181 [2024-10-01 13:44:02.375702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.181 [2024-10-01 13:44:02.375734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.181 [2024-10-01 13:44:02.382338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.181 [2024-10-01 13:44:02.382473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.181 [2024-10-01 13:44:02.382507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.181 [2024-10-01 13:44:02.382526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.181 [2024-10-01 13:44:02.382581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.181 [2024-10-01 13:44:02.382615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.181 [2024-10-01 13:44:02.382633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.181 [2024-10-01 13:44:02.382647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.181 [2024-10-01 13:44:02.382679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.181 [2024-10-01 13:44:02.386583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.181 [2024-10-01 13:44:02.386705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.181 [2024-10-01 13:44:02.386758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.181 [2024-10-01 13:44:02.386780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.181 [2024-10-01 13:44:02.386815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.181 [2024-10-01 13:44:02.386848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.181 [2024-10-01 13:44:02.386866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.181 [2024-10-01 13:44:02.386880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.181 [2024-10-01 13:44:02.386912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.181 [2024-10-01 13:44:02.393303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.181 [2024-10-01 13:44:02.393429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.181 [2024-10-01 13:44:02.393475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.181 [2024-10-01 13:44:02.393508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.181 [2024-10-01 13:44:02.393562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.181 [2024-10-01 13:44:02.393599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.181 [2024-10-01 13:44:02.393617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.181 [2024-10-01 13:44:02.393631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.181 [2024-10-01 13:44:02.393663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.181 [2024-10-01 13:44:02.396903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.181 [2024-10-01 13:44:02.397024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.181 [2024-10-01 13:44:02.397057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.181 [2024-10-01 13:44:02.397075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.181 [2024-10-01 13:44:02.397121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.181 [2024-10-01 13:44:02.397158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.181 [2024-10-01 13:44:02.397176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.181 [2024-10-01 13:44:02.397191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.181 [2024-10-01 13:44:02.397230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.181 [2024-10-01 13:44:02.404389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.181 [2024-10-01 13:44:02.404514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.181 [2024-10-01 13:44:02.404569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.181 [2024-10-01 13:44:02.404592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.181 [2024-10-01 13:44:02.404627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.182 [2024-10-01 13:44:02.404685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.182 [2024-10-01 13:44:02.404705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.182 [2024-10-01 13:44:02.404719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.182 [2024-10-01 13:44:02.404752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.182 [2024-10-01 13:44:02.407787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.182 [2024-10-01 13:44:02.407922] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.182 [2024-10-01 13:44:02.407965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.182 [2024-10-01 13:44:02.407985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.182 [2024-10-01 13:44:02.408019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.182 [2024-10-01 13:44:02.408053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.182 [2024-10-01 13:44:02.408072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.182 [2024-10-01 13:44:02.408086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.182 [2024-10-01 13:44:02.408118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.182 [2024-10-01 13:44:02.415503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.182 [2024-10-01 13:44:02.415642] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.182 [2024-10-01 13:44:02.415687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.182 [2024-10-01 13:44:02.415708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.182 [2024-10-01 13:44:02.415742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.182 [2024-10-01 13:44:02.415775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.182 [2024-10-01 13:44:02.415792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.182 [2024-10-01 13:44:02.415806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.182 [2024-10-01 13:44:02.415839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.182 [2024-10-01 13:44:02.418893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.182 [2024-10-01 13:44:02.419024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.182 [2024-10-01 13:44:02.419057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.182 [2024-10-01 13:44:02.419075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.182 [2024-10-01 13:44:02.419108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.182 [2024-10-01 13:44:02.419140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.182 [2024-10-01 13:44:02.419158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.182 [2024-10-01 13:44:02.419173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.182 [2024-10-01 13:44:02.419222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.182 [2024-10-01 13:44:02.425948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.182 [2024-10-01 13:44:02.426071] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.182 [2024-10-01 13:44:02.426105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.182 [2024-10-01 13:44:02.426124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.182 [2024-10-01 13:44:02.426166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.182 [2024-10-01 13:44:02.426206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.182 [2024-10-01 13:44:02.426224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.182 [2024-10-01 13:44:02.426239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.182 [2024-10-01 13:44:02.426271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.182 [2024-10-01 13:44:02.430171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.182 [2024-10-01 13:44:02.430297] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.182 [2024-10-01 13:44:02.430337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.182 [2024-10-01 13:44:02.430357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.182 [2024-10-01 13:44:02.430390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.182 [2024-10-01 13:44:02.430422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.182 [2024-10-01 13:44:02.430440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.182 [2024-10-01 13:44:02.430455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.182 [2024-10-01 13:44:02.430488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.182 [2024-10-01 13:44:02.436800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.182 [2024-10-01 13:44:02.436926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.182 [2024-10-01 13:44:02.436961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.182 [2024-10-01 13:44:02.436980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.182 [2024-10-01 13:44:02.437014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.182 [2024-10-01 13:44:02.437047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.182 [2024-10-01 13:44:02.437065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.182 [2024-10-01 13:44:02.437079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.182 [2024-10-01 13:44:02.437112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.182 [2024-10-01 13:44:02.440482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.182 [2024-10-01 13:44:02.440625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.182 [2024-10-01 13:44:02.440660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.182 [2024-10-01 13:44:02.440699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.182 [2024-10-01 13:44:02.440736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.182 [2024-10-01 13:44:02.441055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.182 [2024-10-01 13:44:02.441100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.182 [2024-10-01 13:44:02.441119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.182 [2024-10-01 13:44:02.441258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.182 [2024-10-01 13:44:02.447750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.182 [2024-10-01 13:44:02.447895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.182 [2024-10-01 13:44:02.447930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.182 [2024-10-01 13:44:02.447948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.182 [2024-10-01 13:44:02.447983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.182 [2024-10-01 13:44:02.448022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.182 [2024-10-01 13:44:02.448046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.182 [2024-10-01 13:44:02.448060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.182 [2024-10-01 13:44:02.448094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.182 [2024-10-01 13:44:02.451174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.182 [2024-10-01 13:44:02.451295] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.182 [2024-10-01 13:44:02.451328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.182 [2024-10-01 13:44:02.451347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.182 [2024-10-01 13:44:02.451381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.182 [2024-10-01 13:44:02.451413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.182 [2024-10-01 13:44:02.451431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.182 [2024-10-01 13:44:02.451445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.182 [2024-10-01 13:44:02.451476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.182 [2024-10-01 13:44:02.458972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.182 [2024-10-01 13:44:02.459101] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.182 [2024-10-01 13:44:02.459146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.182 [2024-10-01 13:44:02.459165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.182 [2024-10-01 13:44:02.459200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.182 [2024-10-01 13:44:02.459233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.182 [2024-10-01 13:44:02.459273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.182 [2024-10-01 13:44:02.459289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.182 [2024-10-01 13:44:02.459323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.182 [2024-10-01 13:44:02.462171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.182 [2024-10-01 13:44:02.462290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.182 [2024-10-01 13:44:02.462324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.182 [2024-10-01 13:44:02.462342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.182 [2024-10-01 13:44:02.462376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.182 [2024-10-01 13:44:02.462408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.182 [2024-10-01 13:44:02.462426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.182 [2024-10-01 13:44:02.462440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.182 [2024-10-01 13:44:02.462472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.182 [2024-10-01 13:44:02.469187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.182 [2024-10-01 13:44:02.469312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.182 [2024-10-01 13:44:02.469346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.182 [2024-10-01 13:44:02.469365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.182 [2024-10-01 13:44:02.469399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.182 [2024-10-01 13:44:02.469432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.182 [2024-10-01 13:44:02.469449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.182 [2024-10-01 13:44:02.469464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.182 [2024-10-01 13:44:02.469496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.182 [2024-10-01 13:44:02.473428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.182 [2024-10-01 13:44:02.473609] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.182 [2024-10-01 13:44:02.473651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.183 [2024-10-01 13:44:02.473673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.183 [2024-10-01 13:44:02.473712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.183 [2024-10-01 13:44:02.473746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.183 [2024-10-01 13:44:02.473764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.183 [2024-10-01 13:44:02.473780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.183 [2024-10-01 13:44:02.473813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.183 [2024-10-01 13:44:02.480184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.183 [2024-10-01 13:44:02.480362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.183 [2024-10-01 13:44:02.480398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.183 [2024-10-01 13:44:02.480418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.183 [2024-10-01 13:44:02.480456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.183 [2024-10-01 13:44:02.480498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.183 [2024-10-01 13:44:02.480517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.183 [2024-10-01 13:44:02.480532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.183 [2024-10-01 13:44:02.481664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.183 [2024-10-01 13:44:02.483814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.183 [2024-10-01 13:44:02.483948] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.183 [2024-10-01 13:44:02.483988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.183 [2024-10-01 13:44:02.484008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.183 [2024-10-01 13:44:02.484042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.183 [2024-10-01 13:44:02.484074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.183 [2024-10-01 13:44:02.484092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.183 [2024-10-01 13:44:02.484106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.183 [2024-10-01 13:44:02.484139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.183 [2024-10-01 13:44:02.491190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.183 [2024-10-01 13:44:02.491312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.183 [2024-10-01 13:44:02.491345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.183 [2024-10-01 13:44:02.491364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.183 [2024-10-01 13:44:02.491398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.183 [2024-10-01 13:44:02.491430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.183 [2024-10-01 13:44:02.491448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.183 [2024-10-01 13:44:02.491463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.183 [2024-10-01 13:44:02.491495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.183 [2024-10-01 13:44:02.494591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.183 [2024-10-01 13:44:02.494709] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.183 [2024-10-01 13:44:02.494742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.183 [2024-10-01 13:44:02.494760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.183 [2024-10-01 13:44:02.494811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.183 [2024-10-01 13:44:02.494844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.183 [2024-10-01 13:44:02.494882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.183 [2024-10-01 13:44:02.494904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.183 [2024-10-01 13:44:02.494938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.183 [2024-10-01 13:44:02.502372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.183 [2024-10-01 13:44:02.502504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.183 [2024-10-01 13:44:02.502551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.183 [2024-10-01 13:44:02.502573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.183 [2024-10-01 13:44:02.502608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.183 [2024-10-01 13:44:02.502653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.183 [2024-10-01 13:44:02.502673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.183 [2024-10-01 13:44:02.502697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.183 [2024-10-01 13:44:02.502735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.183 [2024-10-01 13:44:02.505654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.183 [2024-10-01 13:44:02.505774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.183 [2024-10-01 13:44:02.505806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.183 [2024-10-01 13:44:02.505825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.183 [2024-10-01 13:44:02.505858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.183 [2024-10-01 13:44:02.505891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.183 [2024-10-01 13:44:02.505909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.183 [2024-10-01 13:44:02.505924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.183 [2024-10-01 13:44:02.505960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.183 [2024-10-01 13:44:02.512591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.183 [2024-10-01 13:44:02.512719] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.183 [2024-10-01 13:44:02.512765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.183 [2024-10-01 13:44:02.512787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.183 [2024-10-01 13:44:02.512823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.183 [2024-10-01 13:44:02.512856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.183 [2024-10-01 13:44:02.512874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.183 [2024-10-01 13:44:02.512909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.183 [2024-10-01 13:44:02.512945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.183 [2024-10-01 13:44:02.516782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.183 [2024-10-01 13:44:02.516911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.183 [2024-10-01 13:44:02.516945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.183 [2024-10-01 13:44:02.516964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.183 [2024-10-01 13:44:02.516998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.183 [2024-10-01 13:44:02.517035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.183 [2024-10-01 13:44:02.517058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.183 [2024-10-01 13:44:02.517072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.183 [2024-10-01 13:44:02.517105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.183 [2024-10-01 13:44:02.523379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.183 [2024-10-01 13:44:02.523565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.183 [2024-10-01 13:44:02.523601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.183 [2024-10-01 13:44:02.523620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.183 [2024-10-01 13:44:02.523657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.183 [2024-10-01 13:44:02.523691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.183 [2024-10-01 13:44:02.523709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.183 [2024-10-01 13:44:02.523724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.183 [2024-10-01 13:44:02.523759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.183 [2024-10-01 13:44:02.527025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.183 [2024-10-01 13:44:02.527184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.183 [2024-10-01 13:44:02.527219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.183 [2024-10-01 13:44:02.527239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.183 [2024-10-01 13:44:02.527276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.183 [2024-10-01 13:44:02.527308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.183 [2024-10-01 13:44:02.527326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.183 [2024-10-01 13:44:02.527341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.183 [2024-10-01 13:44:02.527378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.183 [2024-10-01 13:44:02.533527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.183 [2024-10-01 13:44:02.534716] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.183 [2024-10-01 13:44:02.534816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.183 [2024-10-01 13:44:02.534844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.183 [2024-10-01 13:44:02.535077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.183 [2024-10-01 13:44:02.535131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.183 [2024-10-01 13:44:02.535152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.183 [2024-10-01 13:44:02.535168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.183 [2024-10-01 13:44:02.535204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.183 [2024-10-01 13:44:02.538146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.183 [2024-10-01 13:44:02.538315] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.183 [2024-10-01 13:44:02.538349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.183 [2024-10-01 13:44:02.538367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.183 [2024-10-01 13:44:02.538402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.183 [2024-10-01 13:44:02.538443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.183 [2024-10-01 13:44:02.538463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.183 [2024-10-01 13:44:02.538487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.183 [2024-10-01 13:44:02.538525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.183 [2024-10-01 13:44:02.546073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.183 [2024-10-01 13:44:02.546202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.546238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.184 [2024-10-01 13:44:02.546257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.546291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.546323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.546341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.546355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.546388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.184 [2024-10-01 13:44:02.549181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.549487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.549548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.184 [2024-10-01 13:44:02.549573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.549618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.549681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.549703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.549717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.549751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.184 [2024-10-01 13:44:02.556520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.556670] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.556704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.184 [2024-10-01 13:44:02.556723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.556759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.556793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.556820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.556835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.556869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.184 [2024-10-01 13:44:02.560713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.560845] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.560879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.184 [2024-10-01 13:44:02.560897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.560931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.560963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.560981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.560996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.561028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.184 [2024-10-01 13:44:02.567414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.567565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.567601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.184 [2024-10-01 13:44:02.567629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.567669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.567713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.567732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.567746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.567802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.184 [2024-10-01 13:44:02.571080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.571208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.571242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.184 [2024-10-01 13:44:02.571261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.571302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.571335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.571353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.571367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.571399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.184 [2024-10-01 13:44:02.578558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.578705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.578743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.184 [2024-10-01 13:44:02.578761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.578797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.578830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.578848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.578863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.578896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.184 [2024-10-01 13:44:02.581965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.582092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.582132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.184 [2024-10-01 13:44:02.582151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.582185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.582217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.582235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.582249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.582282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.184 [2024-10-01 13:44:02.589788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.589915] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.589956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.184 [2024-10-01 13:44:02.590002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.590039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.590072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.590090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.590104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.590137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.184 [2024-10-01 13:44:02.593171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.593293] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.593327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.184 [2024-10-01 13:44:02.593346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.593380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.593412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.593430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.593445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.593483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.184 [2024-10-01 13:44:02.600176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.600340] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.600400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.184 [2024-10-01 13:44:02.600436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.600491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.600562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.600595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.600622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.600961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.184 [2024-10-01 13:44:02.604416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.604582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.604640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.184 [2024-10-01 13:44:02.604676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.604731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.604782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.604841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.604868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.604920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.184 [2024-10-01 13:44:02.611198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.611347] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.611404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.184 [2024-10-01 13:44:02.611439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.611494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.611567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.611602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.611626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.612805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.184 [2024-10-01 13:44:02.614863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.615010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.615068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.184 [2024-10-01 13:44:02.615102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.615157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.615209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.184 [2024-10-01 13:44:02.615239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.184 [2024-10-01 13:44:02.615264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.184 [2024-10-01 13:44:02.615608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.184 [2024-10-01 13:44:02.622362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.184 [2024-10-01 13:44:02.622514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.184 [2024-10-01 13:44:02.622587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.184 [2024-10-01 13:44:02.622623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.184 [2024-10-01 13:44:02.622679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.184 [2024-10-01 13:44:02.622733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.622764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.622790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.622841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.185 [2024-10-01 13:44:02.625749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.625898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.185 [2024-10-01 13:44:02.625955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.185 [2024-10-01 13:44:02.625990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.185 [2024-10-01 13:44:02.626044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.185 [2024-10-01 13:44:02.626097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.626128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.626152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.627309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.185 [2024-10-01 13:44:02.633487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.633703] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.185 [2024-10-01 13:44:02.633780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.185 [2024-10-01 13:44:02.633817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.185 [2024-10-01 13:44:02.633876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.185 [2024-10-01 13:44:02.633928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.633959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.633985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.634038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.185 [2024-10-01 13:44:02.636936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.637107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.185 [2024-10-01 13:44:02.637166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.185 [2024-10-01 13:44:02.637203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.185 [2024-10-01 13:44:02.637258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.185 [2024-10-01 13:44:02.637310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.637342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.637366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.637418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.185 [2024-10-01 13:44:02.644041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.644279] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.185 [2024-10-01 13:44:02.644331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.185 [2024-10-01 13:44:02.644365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.185 [2024-10-01 13:44:02.644479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.185 [2024-10-01 13:44:02.644827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.644871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.644903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.645135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.185 [2024-10-01 13:44:02.648399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.648587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.185 [2024-10-01 13:44:02.648643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.185 [2024-10-01 13:44:02.648679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.185 [2024-10-01 13:44:02.648733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.185 [2024-10-01 13:44:02.648786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.648818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.648843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.648893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.185 [2024-10-01 13:44:02.654504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.654668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.185 [2024-10-01 13:44:02.654714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.185 [2024-10-01 13:44:02.654747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.185 [2024-10-01 13:44:02.655570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.185 [2024-10-01 13:44:02.655805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.655848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.655890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.656018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.185 [2024-10-01 13:44:02.659249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.659400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.185 [2024-10-01 13:44:02.659456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.185 [2024-10-01 13:44:02.659492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.185 [2024-10-01 13:44:02.659565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.185 [2024-10-01 13:44:02.659621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.659652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.659711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.659768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.185 8565.90 IOPS, 33.46 MiB/s [2024-10-01 13:44:02.668108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.669667] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.185 [2024-10-01 13:44:02.669723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.185 [2024-10-01 13:44:02.669756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.185 [2024-10-01 13:44:02.670078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.185 [2024-10-01 13:44:02.670917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.670983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.671018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.671043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.672367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.185 [2024-10-01 13:44:02.672480] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.185 [2024-10-01 13:44:02.672525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.185 [2024-10-01 13:44:02.672574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.185 [2024-10-01 13:44:02.672829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.185 [2024-10-01 13:44:02.673959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.674002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.674033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.674747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.185 [2024-10-01 13:44:02.678665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.678815] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.185 [2024-10-01 13:44:02.678863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.185 [2024-10-01 13:44:02.678895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.185 [2024-10-01 13:44:02.678950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.185 [2024-10-01 13:44:02.679004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.679034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.679060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.679111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.185 [2024-10-01 13:44:02.682090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.682267] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.185 [2024-10-01 13:44:02.682324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.185 [2024-10-01 13:44:02.682359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.185 [2024-10-01 13:44:02.682413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.185 [2024-10-01 13:44:02.682466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.682497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.682522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.682595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.185 [2024-10-01 13:44:02.689215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.689367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.185 [2024-10-01 13:44:02.689426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.185 [2024-10-01 13:44:02.689463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.185 [2024-10-01 13:44:02.689517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.185 [2024-10-01 13:44:02.689591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.689623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.689648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.689981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.185 [2024-10-01 13:44:02.693419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.693583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.185 [2024-10-01 13:44:02.693642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.185 [2024-10-01 13:44:02.693678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.185 [2024-10-01 13:44:02.693733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.185 [2024-10-01 13:44:02.693785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.185 [2024-10-01 13:44:02.693816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.185 [2024-10-01 13:44:02.693840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.185 [2024-10-01 13:44:02.693891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.185 [2024-10-01 13:44:02.699345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.185 [2024-10-01 13:44:02.700250] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.700304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.186 [2024-10-01 13:44:02.700337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.700580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.700726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.700772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.700804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.701953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.186 [2024-10-01 13:44:02.704029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.704179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.704231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.186 [2024-10-01 13:44:02.704267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.704321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.704374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.704405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.704430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.704776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.186 [2024-10-01 13:44:02.710087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.710237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.710295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.186 [2024-10-01 13:44:02.710330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.710385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.710438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.710468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.710493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.711499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.186 [2024-10-01 13:44:02.714865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.715166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.715219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.186 [2024-10-01 13:44:02.715253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.715368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.715424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.715456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.715481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.716668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.186 [2024-10-01 13:44:02.720192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.720339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.720396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.186 [2024-10-01 13:44:02.720431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.720485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.720554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.720588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.720613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.720665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.186 [2024-10-01 13:44:02.724974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.725123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.725181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.186 [2024-10-01 13:44:02.725215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.726204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.726486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.726529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.726580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.726647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.186 [2024-10-01 13:44:02.731515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.731692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.731746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.186 [2024-10-01 13:44:02.731780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.732914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.733610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.733654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.733684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.733794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.186 [2024-10-01 13:44:02.735084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.735247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.735304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.186 [2024-10-01 13:44:02.735366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.736766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.737768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.737813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.737844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.738008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.186 [2024-10-01 13:44:02.741643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.741795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.741852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.186 [2024-10-01 13:44:02.741887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.741941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.742015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.742048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.742073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.742125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.186 [2024-10-01 13:44:02.746241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.746391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.746448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.186 [2024-10-01 13:44:02.746483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.747623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.748354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.748400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.748430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.748570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.186 [2024-10-01 13:44:02.752405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.752570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.752627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.186 [2024-10-01 13:44:02.752662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.752717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.752770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.752831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.752856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.752909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.186 [2024-10-01 13:44:02.756346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.756483] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.756518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.186 [2024-10-01 13:44:02.756555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.756594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.756627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.756646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.756661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.756693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.186 [2024-10-01 13:44:02.762778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.762901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.762935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.186 [2024-10-01 13:44:02.762954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.762986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.763018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.763035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.763050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.763082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.186 [2024-10-01 13:44:02.766960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.767081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.767114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.186 [2024-10-01 13:44:02.767132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.767165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.767198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.767216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.767230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.767262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.186 [2024-10-01 13:44:02.773709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.773864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.773924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.186 [2024-10-01 13:44:02.773960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.774014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.774067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.186 [2024-10-01 13:44:02.774097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.186 [2024-10-01 13:44:02.774123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.186 [2024-10-01 13:44:02.775327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.186 [2024-10-01 13:44:02.777384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.186 [2024-10-01 13:44:02.777548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.186 [2024-10-01 13:44:02.777612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.186 [2024-10-01 13:44:02.777647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.186 [2024-10-01 13:44:02.777702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.186 [2024-10-01 13:44:02.777755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.777786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.777811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.778141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.187 [2024-10-01 13:44:02.784870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.785020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.785068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.187 [2024-10-01 13:44:02.785102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.785155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.785208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.785238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.785264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.785314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.187 [2024-10-01 13:44:02.788264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.788413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.788470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.187 [2024-10-01 13:44:02.788504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.788608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.789770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.789814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.789845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.790132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.187 [2024-10-01 13:44:02.795974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.796122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.796178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.187 [2024-10-01 13:44:02.796212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.796266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.796320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.796350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.796374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.796426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.187 [2024-10-01 13:44:02.799260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.799407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.799463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.187 [2024-10-01 13:44:02.799498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.799568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.799625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.799656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.799681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.799731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.187 [2024-10-01 13:44:02.806188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.806337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.806393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.187 [2024-10-01 13:44:02.806428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.806482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.806554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.806589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.806642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.806967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.187 [2024-10-01 13:44:02.810333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.810481] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.810550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.187 [2024-10-01 13:44:02.810587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.810641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.810693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.810724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.810753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.810803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.187 [2024-10-01 13:44:02.816973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.817124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.817181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.187 [2024-10-01 13:44:02.817222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.818384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.818720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.818763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.818791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.819955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.187 [2024-10-01 13:44:02.820772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.820921] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.820975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.187 [2024-10-01 13:44:02.821009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.821328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.821529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.821593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.821620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.821773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.187 [2024-10-01 13:44:02.828198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.828385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.828441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.187 [2024-10-01 13:44:02.828473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.828530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.828607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.828635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.828658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.828711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.187 [2024-10-01 13:44:02.832923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.833087] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.833144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.187 [2024-10-01 13:44:02.833180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.834319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.835014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.835059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.835091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.835200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.187 [2024-10-01 13:44:02.839047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.839196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.839252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.187 [2024-10-01 13:44:02.839287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.839341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.839395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.839425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.839449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.839499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.187 [2024-10-01 13:44:02.843032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.843179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.843237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.187 [2024-10-01 13:44:02.843271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.843325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.844636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.844681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.844713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.844976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.187 [2024-10-01 13:44:02.849388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.849552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.849609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.187 [2024-10-01 13:44:02.849645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.849701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.849754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.849786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.849811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.849861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.187 [2024-10-01 13:44:02.853689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.853838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.853890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.187 [2024-10-01 13:44:02.853925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.853979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.854031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.854062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.854088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.854138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.187 [2024-10-01 13:44:02.860373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.860596] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.860650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.187 [2024-10-01 13:44:02.860683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.860740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.860794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.187 [2024-10-01 13:44:02.860825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.187 [2024-10-01 13:44:02.860849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.187 [2024-10-01 13:44:02.862035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.187 [2024-10-01 13:44:02.864103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.187 [2024-10-01 13:44:02.864252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.187 [2024-10-01 13:44:02.864309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.187 [2024-10-01 13:44:02.864344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.187 [2024-10-01 13:44:02.864398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.187 [2024-10-01 13:44:02.864451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.188 [2024-10-01 13:44:02.864483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.188 [2024-10-01 13:44:02.864508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.188 [2024-10-01 13:44:02.864849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.188 [2024-10-01 13:44:02.871418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.188 [2024-10-01 13:44:02.871895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.188 [2024-10-01 13:44:02.871954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.188 [2024-10-01 13:44:02.871990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.188 [2024-10-01 13:44:02.872062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.188 [2024-10-01 13:44:02.872120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.188 [2024-10-01 13:44:02.872152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.188 [2024-10-01 13:44:02.872180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.188 [2024-10-01 13:44:02.872262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.188 [2024-10-01 13:44:02.875142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.188 [2024-10-01 13:44:02.875293] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.188 [2024-10-01 13:44:02.875352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.188 [2024-10-01 13:44:02.875388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.188 [2024-10-01 13:44:02.875443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.188 [2024-10-01 13:44:02.875496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.188 [2024-10-01 13:44:02.875527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.188 [2024-10-01 13:44:02.875575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.188 [2024-10-01 13:44:02.876757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.188 [2024-10-01 13:44:02.882902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.188 [2024-10-01 13:44:02.883079] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.188 [2024-10-01 13:44:02.883138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.188 [2024-10-01 13:44:02.883233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.188 [2024-10-01 13:44:02.883296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.188 [2024-10-01 13:44:02.883353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.188 [2024-10-01 13:44:02.883384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.188 [2024-10-01 13:44:02.883409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.188 [2024-10-01 13:44:02.883462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.188 [2024-10-01 13:44:02.886350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.188 [2024-10-01 13:44:02.886508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.188 [2024-10-01 13:44:02.886569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.188 [2024-10-01 13:44:02.886603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.188 [2024-10-01 13:44:02.886659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.188 [2024-10-01 13:44:02.886713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.188 [2024-10-01 13:44:02.886744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.188 [2024-10-01 13:44:02.886769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.188 [2024-10-01 13:44:02.886822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.188 [2024-10-01 13:44:02.893568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.188 [2024-10-01 13:44:02.893734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.188 [2024-10-01 13:44:02.893793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.188 [2024-10-01 13:44:02.893829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.188 [2024-10-01 13:44:02.893885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.188 [2024-10-01 13:44:02.893938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.188 [2024-10-01 13:44:02.893969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.188 [2024-10-01 13:44:02.893994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.188 [2024-10-01 13:44:02.894333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.188 [2024-10-01 13:44:02.897806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.188 [2024-10-01 13:44:02.897958] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.188 [2024-10-01 13:44:02.898015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.188 [2024-10-01 13:44:02.898050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.188 [2024-10-01 13:44:02.898104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.188 [2024-10-01 13:44:02.898156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.188 [2024-10-01 13:44:02.898220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.188 [2024-10-01 13:44:02.898246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.188 [2024-10-01 13:44:02.898299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.188 [2024-10-01 13:44:02.903751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.188 [2024-10-01 13:44:02.904704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.188 [2024-10-01 13:44:02.904767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.188 [2024-10-01 13:44:02.904796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.188 [2024-10-01 13:44:02.904991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.188 [2024-10-01 13:44:02.905095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.188 [2024-10-01 13:44:02.905139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.188 [2024-10-01 13:44:02.905158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.188 [2024-10-01 13:44:02.906305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.188 [2024-10-01 13:44:02.908442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.188 [2024-10-01 13:44:02.908588] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.188 [2024-10-01 13:44:02.908632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.188 [2024-10-01 13:44:02.908655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.188 [2024-10-01 13:44:02.908702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.188 [2024-10-01 13:44:02.908737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.188 [2024-10-01 13:44:02.908756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.189 [2024-10-01 13:44:02.908771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.189 [2024-10-01 13:44:02.908804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.189 [2024-10-01 13:44:02.914444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.189 [2024-10-01 13:44:02.914581] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.189 [2024-10-01 13:44:02.914625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.189 [2024-10-01 13:44:02.914646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.189 [2024-10-01 13:44:02.914682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.189 [2024-10-01 13:44:02.914715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.189 [2024-10-01 13:44:02.914732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.189 [2024-10-01 13:44:02.914747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.189 [2024-10-01 13:44:02.914780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.189 [2024-10-01 13:44:02.919404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.189 [2024-10-01 13:44:02.919562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.189 [2024-10-01 13:44:02.919598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.189 [2024-10-01 13:44:02.919617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.189 [2024-10-01 13:44:02.919652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.189 [2024-10-01 13:44:02.919685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.189 [2024-10-01 13:44:02.919703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.189 [2024-10-01 13:44:02.919718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.189 [2024-10-01 13:44:02.919750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.189 [2024-10-01 13:44:02.924579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.189 [2024-10-01 13:44:02.924771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.189 [2024-10-01 13:44:02.924820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.189 [2024-10-01 13:44:02.924843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.189 [2024-10-01 13:44:02.924881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.189 [2024-10-01 13:44:02.924916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.189 [2024-10-01 13:44:02.924940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.189 [2024-10-01 13:44:02.924957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.189 [2024-10-01 13:44:02.926339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.189 [2024-10-01 13:44:02.929515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.189 [2024-10-01 13:44:02.929701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.189 [2024-10-01 13:44:02.929738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.189 [2024-10-01 13:44:02.929757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.189 [2024-10-01 13:44:02.930743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.189 [2024-10-01 13:44:02.931014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.189 [2024-10-01 13:44:02.931054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.189 [2024-10-01 13:44:02.931074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.189 [2024-10-01 13:44:02.931120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.189 [2024-10-01 13:44:02.934723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.189 [2024-10-01 13:44:02.934876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.189 [2024-10-01 13:44:02.934911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.189 [2024-10-01 13:44:02.934930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.189 [2024-10-01 13:44:02.936135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.189 [2024-10-01 13:44:02.936388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.189 [2024-10-01 13:44:02.936426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.189 [2024-10-01 13:44:02.936445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.189 [2024-10-01 13:44:02.937577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.189 [2024-10-01 13:44:02.939644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.189 [2024-10-01 13:44:02.939763] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.189 [2024-10-01 13:44:02.939796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.189 [2024-10-01 13:44:02.939814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.189 [2024-10-01 13:44:02.939848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.189 [2024-10-01 13:44:02.939895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.189 [2024-10-01 13:44:02.939917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.189 [2024-10-01 13:44:02.939931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.189 [2024-10-01 13:44:02.939964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.189 [2024-10-01 13:44:02.945598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.189 [2024-10-01 13:44:02.945720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.189 [2024-10-01 13:44:02.945754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.189 [2024-10-01 13:44:02.945772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.189 [2024-10-01 13:44:02.945805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.189 [2024-10-01 13:44:02.945838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.189 [2024-10-01 13:44:02.945855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.189 [2024-10-01 13:44:02.945870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.189 [2024-10-01 13:44:02.945902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.189 [2024-10-01 13:44:02.950740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.189 [2024-10-01 13:44:02.950863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.189 [2024-10-01 13:44:02.950896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.189 [2024-10-01 13:44:02.950915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.189 [2024-10-01 13:44:02.952035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.189 [2024-10-01 13:44:02.952751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.189 [2024-10-01 13:44:02.952795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.189 [2024-10-01 13:44:02.952837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.189 [2024-10-01 13:44:02.952937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.189 [2024-10-01 13:44:02.956769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.189 [2024-10-01 13:44:02.956903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.189 [2024-10-01 13:44:02.956936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.190 [2024-10-01 13:44:02.956956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.190 [2024-10-01 13:44:02.956990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.190 [2024-10-01 13:44:02.957021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.190 [2024-10-01 13:44:02.957039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.190 [2024-10-01 13:44:02.957054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.190 [2024-10-01 13:44:02.957086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.190 [2024-10-01 13:44:02.960833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.190 [2024-10-01 13:44:02.960952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.190 [2024-10-01 13:44:02.960985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.190 [2024-10-01 13:44:02.961011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.190 [2024-10-01 13:44:02.961049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.190 [2024-10-01 13:44:02.961081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.190 [2024-10-01 13:44:02.961098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.190 [2024-10-01 13:44:02.961113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.190 [2024-10-01 13:44:02.962355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.190 [2024-10-01 13:44:02.967083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.190 [2024-10-01 13:44:02.967207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.190 [2024-10-01 13:44:02.967240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.190 [2024-10-01 13:44:02.967260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.190 [2024-10-01 13:44:02.967294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.190 [2024-10-01 13:44:02.967327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.190 [2024-10-01 13:44:02.967344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.190 [2024-10-01 13:44:02.967358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.190 [2024-10-01 13:44:02.967390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.190 [2024-10-01 13:44:02.971319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.190 [2024-10-01 13:44:02.971463] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.190 [2024-10-01 13:44:02.971497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.190 [2024-10-01 13:44:02.971516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.190 [2024-10-01 13:44:02.971567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.190 [2024-10-01 13:44:02.971603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.190 [2024-10-01 13:44:02.971621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.190 [2024-10-01 13:44:02.971636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.190 [2024-10-01 13:44:02.971669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.190 [2024-10-01 13:44:02.977928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.190 [2024-10-01 13:44:02.978051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.190 [2024-10-01 13:44:02.978085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.190 [2024-10-01 13:44:02.978115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.190 [2024-10-01 13:44:02.978165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.190 [2024-10-01 13:44:02.978201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.190 [2024-10-01 13:44:02.978219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.190 [2024-10-01 13:44:02.978233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.190 [2024-10-01 13:44:02.978266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.190 [2024-10-01 13:44:02.981452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.190 [2024-10-01 13:44:02.981583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.190 [2024-10-01 13:44:02.981617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.190 [2024-10-01 13:44:02.981636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.190 [2024-10-01 13:44:02.981670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.190 [2024-10-01 13:44:02.981702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.190 [2024-10-01 13:44:02.981722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.190 [2024-10-01 13:44:02.981736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.190 [2024-10-01 13:44:02.981778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.190 [2024-10-01 13:44:02.988918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.190 [2024-10-01 13:44:02.989040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.190 [2024-10-01 13:44:02.989074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.190 [2024-10-01 13:44:02.989092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.190 [2024-10-01 13:44:02.989127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.190 [2024-10-01 13:44:02.989183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.190 [2024-10-01 13:44:02.989203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.190 [2024-10-01 13:44:02.989217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.190 [2024-10-01 13:44:02.989249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.190 [2024-10-01 13:44:02.992263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.190 [2024-10-01 13:44:02.992387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.190 [2024-10-01 13:44:02.992421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.190 [2024-10-01 13:44:02.992439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.190 [2024-10-01 13:44:02.992473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.190 [2024-10-01 13:44:02.992506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.190 [2024-10-01 13:44:02.992523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.190 [2024-10-01 13:44:02.992554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.190 [2024-10-01 13:44:02.992590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.190 [2024-10-01 13:44:03.000114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.190 [2024-10-01 13:44:03.000270] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.190 [2024-10-01 13:44:03.000306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.190 [2024-10-01 13:44:03.000325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.190 [2024-10-01 13:44:03.000360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.190 [2024-10-01 13:44:03.000393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.190 [2024-10-01 13:44:03.000411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.190 [2024-10-01 13:44:03.000429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.190 [2024-10-01 13:44:03.000471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.190 [2024-10-01 13:44:03.003381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.190 [2024-10-01 13:44:03.003500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.190 [2024-10-01 13:44:03.003552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.190 [2024-10-01 13:44:03.003578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.190 [2024-10-01 13:44:03.003613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.190 [2024-10-01 13:44:03.003647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.190 [2024-10-01 13:44:03.003665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.190 [2024-10-01 13:44:03.003680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.190 [2024-10-01 13:44:03.003736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.190 [2024-10-01 13:44:03.010290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.190 [2024-10-01 13:44:03.010413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.190 [2024-10-01 13:44:03.010447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.190 [2024-10-01 13:44:03.010466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.191 [2024-10-01 13:44:03.010500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.191 [2024-10-01 13:44:03.010547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.191 [2024-10-01 13:44:03.010569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.191 [2024-10-01 13:44:03.010584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.191 [2024-10-01 13:44:03.010636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.191 [2024-10-01 13:44:03.014480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.191 [2024-10-01 13:44:03.014614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.191 [2024-10-01 13:44:03.014648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.191 [2024-10-01 13:44:03.014667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.191 [2024-10-01 13:44:03.014717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.191 [2024-10-01 13:44:03.014754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.191 [2024-10-01 13:44:03.014772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.191 [2024-10-01 13:44:03.014787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.191 [2024-10-01 13:44:03.014818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.191 [2024-10-01 13:44:03.021238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.191 [2024-10-01 13:44:03.021365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.191 [2024-10-01 13:44:03.021399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.191 [2024-10-01 13:44:03.021418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.191 [2024-10-01 13:44:03.021452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.191 [2024-10-01 13:44:03.021485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.191 [2024-10-01 13:44:03.021502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.191 [2024-10-01 13:44:03.021516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.191 [2024-10-01 13:44:03.021567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.191 [2024-10-01 13:44:03.025403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.191 [2024-10-01 13:44:03.025524] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.191 [2024-10-01 13:44:03.025574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.191 [2024-10-01 13:44:03.025616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.191 [2024-10-01 13:44:03.025653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.191 [2024-10-01 13:44:03.025686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.191 [2024-10-01 13:44:03.025705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.191 [2024-10-01 13:44:03.025719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.191 [2024-10-01 13:44:03.025751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.191 [2024-10-01 13:44:03.031361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.191 [2024-10-01 13:44:03.031565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.191 [2024-10-01 13:44:03.031602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.191 [2024-10-01 13:44:03.031622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.191 [2024-10-01 13:44:03.031660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.191 [2024-10-01 13:44:03.031693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.191 [2024-10-01 13:44:03.031712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.191 [2024-10-01 13:44:03.031728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.191 [2024-10-01 13:44:03.031761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.191 [2024-10-01 13:44:03.035553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.191 [2024-10-01 13:44:03.035720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.191 [2024-10-01 13:44:03.035756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.191 [2024-10-01 13:44:03.035775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.191 [2024-10-01 13:44:03.037054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.191 [2024-10-01 13:44:03.037278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.191 [2024-10-01 13:44:03.037312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.191 [2024-10-01 13:44:03.037330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.191 [2024-10-01 13:44:03.037367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.191 [2024-10-01 13:44:03.042033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.191 [2024-10-01 13:44:03.042211] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.191 [2024-10-01 13:44:03.042248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.191 [2024-10-01 13:44:03.042268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.191 [2024-10-01 13:44:03.042305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.191 [2024-10-01 13:44:03.042355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.191 [2024-10-01 13:44:03.042400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.191 [2024-10-01 13:44:03.042417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.191 [2024-10-01 13:44:03.042452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.191 [2024-10-01 13:44:03.046314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.191 [2024-10-01 13:44:03.046448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.191 [2024-10-01 13:44:03.046482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.191 [2024-10-01 13:44:03.046501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.191 [2024-10-01 13:44:03.046551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.191 [2024-10-01 13:44:03.046588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.191 [2024-10-01 13:44:03.046607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.191 [2024-10-01 13:44:03.046622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.191 [2024-10-01 13:44:03.046654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.191 [2024-10-01 13:44:03.052929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.191 [2024-10-01 13:44:03.053069] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.191 [2024-10-01 13:44:03.053103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.191 [2024-10-01 13:44:03.053121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.191 [2024-10-01 13:44:03.053156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.191 [2024-10-01 13:44:03.053188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.191 [2024-10-01 13:44:03.053215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.191 [2024-10-01 13:44:03.053234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.191 [2024-10-01 13:44:03.053268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.191 [2024-10-01 13:44:03.056491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.191 [2024-10-01 13:44:03.056635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.191 [2024-10-01 13:44:03.056673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.191 [2024-10-01 13:44:03.056692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.191 [2024-10-01 13:44:03.056729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.191 [2024-10-01 13:44:03.056783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.191 [2024-10-01 13:44:03.056805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.191 [2024-10-01 13:44:03.056820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.191 [2024-10-01 13:44:03.056853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.191 [2024-10-01 13:44:03.063862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.191 [2024-10-01 13:44:03.064000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.191 [2024-10-01 13:44:03.064034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.191 [2024-10-01 13:44:03.064054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.191 [2024-10-01 13:44:03.064100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.191 [2024-10-01 13:44:03.064135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.191 [2024-10-01 13:44:03.064157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.192 [2024-10-01 13:44:03.064180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.192 [2024-10-01 13:44:03.064216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.192 [2024-10-01 13:44:03.067168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.192 [2024-10-01 13:44:03.067301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.192 [2024-10-01 13:44:03.067335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.192 [2024-10-01 13:44:03.067355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.192 [2024-10-01 13:44:03.067400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.192 [2024-10-01 13:44:03.067434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.192 [2024-10-01 13:44:03.067452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.192 [2024-10-01 13:44:03.067466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.192 [2024-10-01 13:44:03.067498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.192 [2024-10-01 13:44:03.074877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.192 [2024-10-01 13:44:03.075009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.192 [2024-10-01 13:44:03.075052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.192 [2024-10-01 13:44:03.075074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.192 [2024-10-01 13:44:03.075109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.192 [2024-10-01 13:44:03.075144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.192 [2024-10-01 13:44:03.075171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.192 [2024-10-01 13:44:03.075187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.192 [2024-10-01 13:44:03.075221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.192 [2024-10-01 13:44:03.078057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.192 [2024-10-01 13:44:03.078202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.192 [2024-10-01 13:44:03.078237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.192 [2024-10-01 13:44:03.078256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.192 [2024-10-01 13:44:03.078325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.192 [2024-10-01 13:44:03.078359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.192 [2024-10-01 13:44:03.078377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.192 [2024-10-01 13:44:03.078392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.192 [2024-10-01 13:44:03.078424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.192 [2024-10-01 13:44:03.084995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.192 [2024-10-01 13:44:03.085124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.192 [2024-10-01 13:44:03.085157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.192 [2024-10-01 13:44:03.085176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.192 [2024-10-01 13:44:03.085210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.192 [2024-10-01 13:44:03.085244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.192 [2024-10-01 13:44:03.085261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.192 [2024-10-01 13:44:03.085276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.192 [2024-10-01 13:44:03.085308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.192 [2024-10-01 13:44:03.089099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.192 [2024-10-01 13:44:03.089222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.192 [2024-10-01 13:44:03.089255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.192 [2024-10-01 13:44:03.089274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.192 [2024-10-01 13:44:03.089308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.192 [2024-10-01 13:44:03.089340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.192 [2024-10-01 13:44:03.089359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.192 [2024-10-01 13:44:03.089374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.192 [2024-10-01 13:44:03.089406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.192 [2024-10-01 13:44:03.095547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.192 [2024-10-01 13:44:03.095677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.192 [2024-10-01 13:44:03.095713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.192 [2024-10-01 13:44:03.095732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.192 [2024-10-01 13:44:03.095767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.192 [2024-10-01 13:44:03.095800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.192 [2024-10-01 13:44:03.095817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.192 [2024-10-01 13:44:03.095856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.192 [2024-10-01 13:44:03.095907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.192 [2024-10-01 13:44:03.099201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.192 [2024-10-01 13:44:03.099328] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.192 [2024-10-01 13:44:03.099362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.192 [2024-10-01 13:44:03.099380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.192 [2024-10-01 13:44:03.099415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.192 [2024-10-01 13:44:03.099700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.192 [2024-10-01 13:44:03.099749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.192 [2024-10-01 13:44:03.099768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.192 [2024-10-01 13:44:03.099929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.192 [2024-10-01 13:44:03.106351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.192 [2024-10-01 13:44:03.106473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.192 [2024-10-01 13:44:03.106507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.192 [2024-10-01 13:44:03.106524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.192 [2024-10-01 13:44:03.106575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.192 [2024-10-01 13:44:03.106610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.192 [2024-10-01 13:44:03.106628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.192 [2024-10-01 13:44:03.106641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.192 [2024-10-01 13:44:03.106675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.192 [2024-10-01 13:44:03.109657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.192 [2024-10-01 13:44:03.109785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.192 [2024-10-01 13:44:03.109819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.192 [2024-10-01 13:44:03.109838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.192 [2024-10-01 13:44:03.109872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.192 [2024-10-01 13:44:03.109904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.192 [2024-10-01 13:44:03.109922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.192 [2024-10-01 13:44:03.109936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.192 [2024-10-01 13:44:03.109969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.192 [2024-10-01 13:44:03.117342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.192 [2024-10-01 13:44:03.117492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.192 [2024-10-01 13:44:03.117527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.192 [2024-10-01 13:44:03.117564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.192 [2024-10-01 13:44:03.117601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.192 [2024-10-01 13:44:03.117634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.192 [2024-10-01 13:44:03.117652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.192 [2024-10-01 13:44:03.117667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.192 [2024-10-01 13:44:03.117700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.192 [2024-10-01 13:44:03.120663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.193 [2024-10-01 13:44:03.120783] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.193 [2024-10-01 13:44:03.120823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.193 [2024-10-01 13:44:03.120842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.193 [2024-10-01 13:44:03.120877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.193 [2024-10-01 13:44:03.120909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.193 [2024-10-01 13:44:03.120927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.193 [2024-10-01 13:44:03.120941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.193 [2024-10-01 13:44:03.120973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.193 [2024-10-01 13:44:03.127475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.193 [2024-10-01 13:44:03.127615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.193 [2024-10-01 13:44:03.127649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.193 [2024-10-01 13:44:03.127668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.193 [2024-10-01 13:44:03.127702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.193 [2024-10-01 13:44:03.127735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.193 [2024-10-01 13:44:03.127753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.193 [2024-10-01 13:44:03.127767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.193 [2024-10-01 13:44:03.127800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.193 [2024-10-01 13:44:03.131586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.193 [2024-10-01 13:44:03.131711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.193 [2024-10-01 13:44:03.131744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.193 [2024-10-01 13:44:03.131763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.193 [2024-10-01 13:44:03.131823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.193 [2024-10-01 13:44:03.131858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.193 [2024-10-01 13:44:03.131889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.193 [2024-10-01 13:44:03.131908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.193 [2024-10-01 13:44:03.131941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.193 [2024-10-01 13:44:03.138190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.193 [2024-10-01 13:44:03.138316] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.193 [2024-10-01 13:44:03.138350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.193 [2024-10-01 13:44:03.138369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.193 [2024-10-01 13:44:03.138403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.193 [2024-10-01 13:44:03.138434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.193 [2024-10-01 13:44:03.138451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.193 [2024-10-01 13:44:03.138465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.193 [2024-10-01 13:44:03.138497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.193 [2024-10-01 13:44:03.142169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.193 [2024-10-01 13:44:03.142300] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.193 [2024-10-01 13:44:03.142334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.193 [2024-10-01 13:44:03.142352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.193 [2024-10-01 13:44:03.142386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.193 [2024-10-01 13:44:03.142435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.193 [2024-10-01 13:44:03.142456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.193 [2024-10-01 13:44:03.142471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.193 [2024-10-01 13:44:03.142503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.193 [2024-10-01 13:44:03.148293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.193 [2024-10-01 13:44:03.148414] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.193 [2024-10-01 13:44:03.148448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.193 [2024-10-01 13:44:03.148466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.193 [2024-10-01 13:44:03.148501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.193 [2024-10-01 13:44:03.149472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.193 [2024-10-01 13:44:03.149515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.193 [2024-10-01 13:44:03.149548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.193 [2024-10-01 13:44:03.149774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.193 [2024-10-01 13:44:03.153022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.193 [2024-10-01 13:44:03.153151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.193 [2024-10-01 13:44:03.153186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.193 [2024-10-01 13:44:03.153205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.193 [2024-10-01 13:44:03.153239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.193 [2024-10-01 13:44:03.153272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.193 [2024-10-01 13:44:03.153289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.193 [2024-10-01 13:44:03.153304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.194 [2024-10-01 13:44:03.153336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.194 [2024-10-01 13:44:03.160677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.194 [2024-10-01 13:44:03.160807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.194 [2024-10-01 13:44:03.160841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.194 [2024-10-01 13:44:03.160860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.194 [2024-10-01 13:44:03.160897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.194 [2024-10-01 13:44:03.160930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.194 [2024-10-01 13:44:03.160947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.194 [2024-10-01 13:44:03.160961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.194 [2024-10-01 13:44:03.160994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.194 [2024-10-01 13:44:03.163890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.194 [2024-10-01 13:44:03.164011] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.194 [2024-10-01 13:44:03.164050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.194 [2024-10-01 13:44:03.164069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.194 [2024-10-01 13:44:03.164104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.194 [2024-10-01 13:44:03.164143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.194 [2024-10-01 13:44:03.164161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.194 [2024-10-01 13:44:03.164176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.194 [2024-10-01 13:44:03.164207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.194 [2024-10-01 13:44:03.170783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.194 [2024-10-01 13:44:03.170910] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.194 [2024-10-01 13:44:03.170944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.194 [2024-10-01 13:44:03.170996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.194 [2024-10-01 13:44:03.171033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.194 [2024-10-01 13:44:03.171066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.194 [2024-10-01 13:44:03.171084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.194 [2024-10-01 13:44:03.171107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.194 [2024-10-01 13:44:03.171376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.194 [2024-10-01 13:44:03.174950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.194 [2024-10-01 13:44:03.175114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.194 [2024-10-01 13:44:03.175149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.194 [2024-10-01 13:44:03.175168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.194 [2024-10-01 13:44:03.175204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.194 [2024-10-01 13:44:03.175237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.194 [2024-10-01 13:44:03.175256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.194 [2024-10-01 13:44:03.175272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.194 [2024-10-01 13:44:03.175304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.194 [2024-10-01 13:44:03.181484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.194 [2024-10-01 13:44:03.181622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.194 [2024-10-01 13:44:03.181656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.194 [2024-10-01 13:44:03.181675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.194 [2024-10-01 13:44:03.181709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.194 [2024-10-01 13:44:03.181743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.194 [2024-10-01 13:44:03.181760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.194 [2024-10-01 13:44:03.181775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.194 [2024-10-01 13:44:03.181807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.194 [2024-10-01 13:44:03.185075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.194 [2024-10-01 13:44:03.185194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.194 [2024-10-01 13:44:03.185227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.194 [2024-10-01 13:44:03.185246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.194 [2024-10-01 13:44:03.185280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.194 [2024-10-01 13:44:03.185338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.194 [2024-10-01 13:44:03.185359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.194 [2024-10-01 13:44:03.185374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.194 [2024-10-01 13:44:03.185674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.194 [2024-10-01 13:44:03.192433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.194 [2024-10-01 13:44:03.192570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.194 [2024-10-01 13:44:03.192605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.194 [2024-10-01 13:44:03.192630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.194 [2024-10-01 13:44:03.192669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.194 [2024-10-01 13:44:03.192702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.194 [2024-10-01 13:44:03.192720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.194 [2024-10-01 13:44:03.192735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.194 [2024-10-01 13:44:03.192768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.194 [2024-10-01 13:44:03.195733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.194 [2024-10-01 13:44:03.195852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.194 [2024-10-01 13:44:03.195902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.194 [2024-10-01 13:44:03.195923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.194 [2024-10-01 13:44:03.195957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.194 [2024-10-01 13:44:03.195990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.194 [2024-10-01 13:44:03.196007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.194 [2024-10-01 13:44:03.196022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.194 [2024-10-01 13:44:03.196054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.194 [2024-10-01 13:44:03.203463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.194 [2024-10-01 13:44:03.203607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.194 [2024-10-01 13:44:03.203642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.194 [2024-10-01 13:44:03.203661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.194 [2024-10-01 13:44:03.203696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.194 [2024-10-01 13:44:03.203741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.194 [2024-10-01 13:44:03.203760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.194 [2024-10-01 13:44:03.203775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.194 [2024-10-01 13:44:03.203808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.194 [2024-10-01 13:44:03.206624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.194 [2024-10-01 13:44:03.206743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.194 [2024-10-01 13:44:03.206782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.194 [2024-10-01 13:44:03.206802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.194 [2024-10-01 13:44:03.206836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.194 [2024-10-01 13:44:03.206868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.194 [2024-10-01 13:44:03.206886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.194 [2024-10-01 13:44:03.206900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.195 [2024-10-01 13:44:03.206932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.195 [2024-10-01 13:44:03.213585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.195 [2024-10-01 13:44:03.213734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.195 [2024-10-01 13:44:03.213770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.195 [2024-10-01 13:44:03.213789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.195 [2024-10-01 13:44:03.213825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.195 [2024-10-01 13:44:03.213859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.195 [2024-10-01 13:44:03.213878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.195 [2024-10-01 13:44:03.213892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.195 [2024-10-01 13:44:03.214166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.195 [2024-10-01 13:44:03.217659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.195 [2024-10-01 13:44:03.217788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.195 [2024-10-01 13:44:03.217822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.195 [2024-10-01 13:44:03.217841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.195 [2024-10-01 13:44:03.217876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.195 [2024-10-01 13:44:03.217909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.195 [2024-10-01 13:44:03.217927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.195 [2024-10-01 13:44:03.217941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.195 [2024-10-01 13:44:03.217973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.195 [2024-10-01 13:44:03.224124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.195 [2024-10-01 13:44:03.224246] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.195 [2024-10-01 13:44:03.224281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.195 [2024-10-01 13:44:03.224343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.195 [2024-10-01 13:44:03.224383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.195 [2024-10-01 13:44:03.224416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.195 [2024-10-01 13:44:03.224435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.195 [2024-10-01 13:44:03.224449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.195 [2024-10-01 13:44:03.224483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.195 [2024-10-01 13:44:03.227760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.195 [2024-10-01 13:44:03.227895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.195 [2024-10-01 13:44:03.227931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.195 [2024-10-01 13:44:03.227949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.195 [2024-10-01 13:44:03.227984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.195 [2024-10-01 13:44:03.228017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.195 [2024-10-01 13:44:03.228035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.195 [2024-10-01 13:44:03.228049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.195 [2024-10-01 13:44:03.228313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.195 [2024-10-01 13:44:03.234950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.195 [2024-10-01 13:44:03.235074] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.195 [2024-10-01 13:44:03.235108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.195 [2024-10-01 13:44:03.235126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.195 [2024-10-01 13:44:03.235160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.195 [2024-10-01 13:44:03.235192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.195 [2024-10-01 13:44:03.235210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.195 [2024-10-01 13:44:03.235224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.195 [2024-10-01 13:44:03.235255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.195 [2024-10-01 13:44:03.238222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.195 [2024-10-01 13:44:03.238343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.195 [2024-10-01 13:44:03.238376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.195 [2024-10-01 13:44:03.238395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.195 [2024-10-01 13:44:03.238429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.195 [2024-10-01 13:44:03.238461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.195 [2024-10-01 13:44:03.238479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.195 [2024-10-01 13:44:03.238511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.195 [2024-10-01 13:44:03.238563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.195 [2024-10-01 13:44:03.245882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.195 [2024-10-01 13:44:03.246006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.195 [2024-10-01 13:44:03.246039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.195 [2024-10-01 13:44:03.246058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.195 [2024-10-01 13:44:03.246091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.195 [2024-10-01 13:44:03.246124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.195 [2024-10-01 13:44:03.246157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.195 [2024-10-01 13:44:03.246176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.195 [2024-10-01 13:44:03.246209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.195 [2024-10-01 13:44:03.249035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.195 [2024-10-01 13:44:03.249154] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.195 [2024-10-01 13:44:03.249187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.195 [2024-10-01 13:44:03.249205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.195 [2024-10-01 13:44:03.249238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.195 [2024-10-01 13:44:03.249272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.195 [2024-10-01 13:44:03.249290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.195 [2024-10-01 13:44:03.249304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.195 [2024-10-01 13:44:03.249335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.195 [2024-10-01 13:44:03.255985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.195 [2024-10-01 13:44:03.256108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.195 [2024-10-01 13:44:03.256149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.195 [2024-10-01 13:44:03.256168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.195 [2024-10-01 13:44:03.256201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.195 [2024-10-01 13:44:03.256233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.195 [2024-10-01 13:44:03.256250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.195 [2024-10-01 13:44:03.256265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.195 [2024-10-01 13:44:03.256554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.195 [2024-10-01 13:44:03.259994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.195 [2024-10-01 13:44:03.260141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.195 [2024-10-01 13:44:03.260176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.195 [2024-10-01 13:44:03.260195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.195 [2024-10-01 13:44:03.260229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.195 [2024-10-01 13:44:03.260261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.195 [2024-10-01 13:44:03.260280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.195 [2024-10-01 13:44:03.260294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.195 [2024-10-01 13:44:03.260326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.195 [2024-10-01 13:44:03.266403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.195 [2024-10-01 13:44:03.266529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.196 [2024-10-01 13:44:03.266577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.196 [2024-10-01 13:44:03.266597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.196 [2024-10-01 13:44:03.266632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.196 [2024-10-01 13:44:03.266665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.196 [2024-10-01 13:44:03.266683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.196 [2024-10-01 13:44:03.266696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.196 [2024-10-01 13:44:03.266728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.196 [2024-10-01 13:44:03.270144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.196 [2024-10-01 13:44:03.270264] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.196 [2024-10-01 13:44:03.270297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.196 [2024-10-01 13:44:03.270315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.196 [2024-10-01 13:44:03.270348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.196 [2024-10-01 13:44:03.270381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.196 [2024-10-01 13:44:03.270399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.196 [2024-10-01 13:44:03.270414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.196 [2024-10-01 13:44:03.270445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.196 [2024-10-01 13:44:03.277563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.196 [2024-10-01 13:44:03.277697] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.196 [2024-10-01 13:44:03.277730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.196 [2024-10-01 13:44:03.277748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.196 [2024-10-01 13:44:03.277806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.196 [2024-10-01 13:44:03.277841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.196 [2024-10-01 13:44:03.277859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.196 [2024-10-01 13:44:03.277875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.196 [2024-10-01 13:44:03.277908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.196 [2024-10-01 13:44:03.280951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.196 [2024-10-01 13:44:03.281098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.196 [2024-10-01 13:44:03.281133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.196 [2024-10-01 13:44:03.281151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.196 [2024-10-01 13:44:03.281187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.196 [2024-10-01 13:44:03.281220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.196 [2024-10-01 13:44:03.281238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.196 [2024-10-01 13:44:03.281253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.196 [2024-10-01 13:44:03.282372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.196 [2024-10-01 13:44:03.288634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.196 [2024-10-01 13:44:03.288801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.196 [2024-10-01 13:44:03.288839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.196 [2024-10-01 13:44:03.288858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.196 [2024-10-01 13:44:03.288912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.196 [2024-10-01 13:44:03.288959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.196 [2024-10-01 13:44:03.288986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.196 [2024-10-01 13:44:03.289002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.196 [2024-10-01 13:44:03.289036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.196 [2024-10-01 13:44:03.291826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.196 [2024-10-01 13:44:03.291962] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.196 [2024-10-01 13:44:03.291996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.196 [2024-10-01 13:44:03.292015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.196 [2024-10-01 13:44:03.292055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.196 [2024-10-01 13:44:03.292088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.196 [2024-10-01 13:44:03.292106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.196 [2024-10-01 13:44:03.292120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.196 [2024-10-01 13:44:03.292177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.196 [2024-10-01 13:44:03.298756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.196 [2024-10-01 13:44:03.298895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.196 [2024-10-01 13:44:03.298930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.196 [2024-10-01 13:44:03.298949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.196 [2024-10-01 13:44:03.298984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.196 [2024-10-01 13:44:03.299255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.196 [2024-10-01 13:44:03.299295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.196 [2024-10-01 13:44:03.299314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.196 [2024-10-01 13:44:03.299448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.196 [2024-10-01 13:44:03.302812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.196 [2024-10-01 13:44:03.302936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.196 [2024-10-01 13:44:03.302969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.196 [2024-10-01 13:44:03.302988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.196 [2024-10-01 13:44:03.303021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.196 [2024-10-01 13:44:03.303053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.196 [2024-10-01 13:44:03.303072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.196 [2024-10-01 13:44:03.303086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.196 [2024-10-01 13:44:03.303118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.196 [2024-10-01 13:44:03.309278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.196 [2024-10-01 13:44:03.309405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.196 [2024-10-01 13:44:03.309439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.196 [2024-10-01 13:44:03.309458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.196 [2024-10-01 13:44:03.309492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.196 [2024-10-01 13:44:03.309525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.196 [2024-10-01 13:44:03.309559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.196 [2024-10-01 13:44:03.309575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.196 [2024-10-01 13:44:03.309609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.196 [2024-10-01 13:44:03.312916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.196 [2024-10-01 13:44:03.313037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.196 [2024-10-01 13:44:03.313097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.196 [2024-10-01 13:44:03.313118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.196 [2024-10-01 13:44:03.313153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.196 [2024-10-01 13:44:03.313186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.196 [2024-10-01 13:44:03.313204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.196 [2024-10-01 13:44:03.313219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.196 [2024-10-01 13:44:03.313491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.196 [2024-10-01 13:44:03.320252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.196 [2024-10-01 13:44:03.320374] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.196 [2024-10-01 13:44:03.320407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.196 [2024-10-01 13:44:03.320425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.197 [2024-10-01 13:44:03.320459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.197 [2024-10-01 13:44:03.320491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.197 [2024-10-01 13:44:03.320508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.197 [2024-10-01 13:44:03.320523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.197 [2024-10-01 13:44:03.320572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.197 [2024-10-01 13:44:03.323552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.197 [2024-10-01 13:44:03.323670] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.197 [2024-10-01 13:44:03.323703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.197 [2024-10-01 13:44:03.323721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.197 [2024-10-01 13:44:03.323755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.197 [2024-10-01 13:44:03.323787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.197 [2024-10-01 13:44:03.323805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.197 [2024-10-01 13:44:03.323819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.197 [2024-10-01 13:44:03.323858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.197 [2024-10-01 13:44:03.331295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.197 [2024-10-01 13:44:03.331418] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.197 [2024-10-01 13:44:03.331452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.197 [2024-10-01 13:44:03.331470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.197 [2024-10-01 13:44:03.331504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.197 [2024-10-01 13:44:03.331575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.197 [2024-10-01 13:44:03.331597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.197 [2024-10-01 13:44:03.331611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.197 [2024-10-01 13:44:03.331644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.197 [2024-10-01 13:44:03.334522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.197 [2024-10-01 13:44:03.334664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.197 [2024-10-01 13:44:03.334697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.197 [2024-10-01 13:44:03.334715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.197 [2024-10-01 13:44:03.334749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.197 [2024-10-01 13:44:03.334790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.197 [2024-10-01 13:44:03.334811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.197 [2024-10-01 13:44:03.334825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.197 [2024-10-01 13:44:03.334857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.197 [2024-10-01 13:44:03.341507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.197 [2024-10-01 13:44:03.341647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.197 [2024-10-01 13:44:03.341686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.197 [2024-10-01 13:44:03.341712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.197 [2024-10-01 13:44:03.341748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.197 [2024-10-01 13:44:03.341782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.197 [2024-10-01 13:44:03.341799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.197 [2024-10-01 13:44:03.341814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.197 [2024-10-01 13:44:03.341846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.197 [2024-10-01 13:44:03.345674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.197 [2024-10-01 13:44:03.345797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.197 [2024-10-01 13:44:03.345830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.197 [2024-10-01 13:44:03.345848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.197 [2024-10-01 13:44:03.345882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.197 [2024-10-01 13:44:03.345914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.197 [2024-10-01 13:44:03.345932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.197 [2024-10-01 13:44:03.345946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.197 [2024-10-01 13:44:03.345978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.197 [2024-10-01 13:44:03.352463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.197 [2024-10-01 13:44:03.352604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.197 [2024-10-01 13:44:03.352639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.197 [2024-10-01 13:44:03.352658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.197 [2024-10-01 13:44:03.352692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.197 [2024-10-01 13:44:03.352733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.197 [2024-10-01 13:44:03.352751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.197 [2024-10-01 13:44:03.352765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.197 [2024-10-01 13:44:03.352798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.197 [2024-10-01 13:44:03.356082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.197 [2024-10-01 13:44:03.356203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.197 [2024-10-01 13:44:03.356237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.197 [2024-10-01 13:44:03.356255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.197 [2024-10-01 13:44:03.356289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.197 [2024-10-01 13:44:03.356323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.197 [2024-10-01 13:44:03.356341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.197 [2024-10-01 13:44:03.356356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.197 [2024-10-01 13:44:03.356388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.197 [2024-10-01 13:44:03.363716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.197 [2024-10-01 13:44:03.363856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.197 [2024-10-01 13:44:03.363907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.197 [2024-10-01 13:44:03.363928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.197 [2024-10-01 13:44:03.363965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.197 [2024-10-01 13:44:03.363998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.197 [2024-10-01 13:44:03.364015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.197 [2024-10-01 13:44:03.364034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.197 [2024-10-01 13:44:03.364088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.197 [2024-10-01 13:44:03.367263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.197 [2024-10-01 13:44:03.367394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.197 [2024-10-01 13:44:03.367429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.197 [2024-10-01 13:44:03.367471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.197 [2024-10-01 13:44:03.367509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.197 [2024-10-01 13:44:03.367569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.197 [2024-10-01 13:44:03.367612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.197 [2024-10-01 13:44:03.367633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.197 [2024-10-01 13:44:03.367670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.197 [2024-10-01 13:44:03.375278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.197 [2024-10-01 13:44:03.375410] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.197 [2024-10-01 13:44:03.375445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.197 [2024-10-01 13:44:03.375464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.197 [2024-10-01 13:44:03.375499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.197 [2024-10-01 13:44:03.375547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.197 [2024-10-01 13:44:03.375569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.198 [2024-10-01 13:44:03.375584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.198 [2024-10-01 13:44:03.375618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.198 [2024-10-01 13:44:03.377365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.198 [2024-10-01 13:44:03.377488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.198 [2024-10-01 13:44:03.377522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.198 [2024-10-01 13:44:03.377557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.198 [2024-10-01 13:44:03.378494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.198 [2024-10-01 13:44:03.378728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.198 [2024-10-01 13:44:03.378762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.198 [2024-10-01 13:44:03.378780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.198 [2024-10-01 13:44:03.378824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.198 [2024-10-01 13:44:03.385809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.198 [2024-10-01 13:44:03.385946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.198 [2024-10-01 13:44:03.385981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.198 [2024-10-01 13:44:03.386000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.198 [2024-10-01 13:44:03.386034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.198 [2024-10-01 13:44:03.386067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.198 [2024-10-01 13:44:03.386085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.198 [2024-10-01 13:44:03.386125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.198 [2024-10-01 13:44:03.386161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.198 [2024-10-01 13:44:03.387462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.198 [2024-10-01 13:44:03.387592] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.198 [2024-10-01 13:44:03.387625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.198 [2024-10-01 13:44:03.387644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.198 [2024-10-01 13:44:03.389015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.198 [2024-10-01 13:44:03.389978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.198 [2024-10-01 13:44:03.390019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.198 [2024-10-01 13:44:03.390038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.198 [2024-10-01 13:44:03.390184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.198 [2024-10-01 13:44:03.396702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.198 [2024-10-01 13:44:03.397065] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.198 [2024-10-01 13:44:03.397114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.198 [2024-10-01 13:44:03.397137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.198 [2024-10-01 13:44:03.397225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.198 [2024-10-01 13:44:03.397262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.198 [2024-10-01 13:44:03.397281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.198 [2024-10-01 13:44:03.397298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.198 [2024-10-01 13:44:03.397332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.198 [2024-10-01 13:44:03.398736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.198 [2024-10-01 13:44:03.398854] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.198 [2024-10-01 13:44:03.398886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.198 [2024-10-01 13:44:03.398905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.198 [2024-10-01 13:44:03.400017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.198 [2024-10-01 13:44:03.400683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.198 [2024-10-01 13:44:03.400731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.198 [2024-10-01 13:44:03.400751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.198 [2024-10-01 13:44:03.400842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.198 [2024-10-01 13:44:03.406905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.198 [2024-10-01 13:44:03.407066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.198 [2024-10-01 13:44:03.407101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.198 [2024-10-01 13:44:03.407120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.198 [2024-10-01 13:44:03.407155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.198 [2024-10-01 13:44:03.407188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.198 [2024-10-01 13:44:03.407205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.198 [2024-10-01 13:44:03.407219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.198 [2024-10-01 13:44:03.407253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.198 [2024-10-01 13:44:03.408835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.198 [2024-10-01 13:44:03.408961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.198 [2024-10-01 13:44:03.408995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.198 [2024-10-01 13:44:03.409014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.198 [2024-10-01 13:44:03.409048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.198 [2024-10-01 13:44:03.409080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.198 [2024-10-01 13:44:03.409098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.198 [2024-10-01 13:44:03.409112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.198 [2024-10-01 13:44:03.409144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.198 [2024-10-01 13:44:03.417034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.198 [2024-10-01 13:44:03.417161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.198 [2024-10-01 13:44:03.417195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.198 [2024-10-01 13:44:03.417214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.198 [2024-10-01 13:44:03.417248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.198 [2024-10-01 13:44:03.417280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.198 [2024-10-01 13:44:03.417297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.198 [2024-10-01 13:44:03.417311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.198 [2024-10-01 13:44:03.417343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.198 [2024-10-01 13:44:03.420216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.198 [2024-10-01 13:44:03.420354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.198 [2024-10-01 13:44:03.420387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.198 [2024-10-01 13:44:03.420405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.198 [2024-10-01 13:44:03.420458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.198 [2024-10-01 13:44:03.420493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.198 [2024-10-01 13:44:03.420511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.199 [2024-10-01 13:44:03.420526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.199 [2024-10-01 13:44:03.420576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.199 [2024-10-01 13:44:03.427237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.199 [2024-10-01 13:44:03.427369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.199 [2024-10-01 13:44:03.427403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.199 [2024-10-01 13:44:03.427423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.199 [2024-10-01 13:44:03.427458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.199 [2024-10-01 13:44:03.427490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.199 [2024-10-01 13:44:03.427507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.199 [2024-10-01 13:44:03.427522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.199 [2024-10-01 13:44:03.427571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.199 [2024-10-01 13:44:03.430883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.199 [2024-10-01 13:44:03.431007] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.199 [2024-10-01 13:44:03.431041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.199 [2024-10-01 13:44:03.431059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.199 [2024-10-01 13:44:03.431093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.199 [2024-10-01 13:44:03.431125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.199 [2024-10-01 13:44:03.431144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.199 [2024-10-01 13:44:03.431159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.199 [2024-10-01 13:44:03.431191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.199 [2024-10-01 13:44:03.437348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.199 [2024-10-01 13:44:03.437478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.199 [2024-10-01 13:44:03.437512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.199 [2024-10-01 13:44:03.437532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.199 [2024-10-01 13:44:03.438489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.199 [2024-10-01 13:44:03.438737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.199 [2024-10-01 13:44:03.438776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.199 [2024-10-01 13:44:03.438816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.199 [2024-10-01 13:44:03.438865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.199 [2024-10-01 13:44:03.441038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.199 [2024-10-01 13:44:03.441929] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.199 [2024-10-01 13:44:03.441978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.199 [2024-10-01 13:44:03.442000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.199 [2024-10-01 13:44:03.442193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.199 [2024-10-01 13:44:03.442303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.199 [2024-10-01 13:44:03.442329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.199 [2024-10-01 13:44:03.442344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.199 [2024-10-01 13:44:03.442378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.199 [2024-10-01 13:44:03.447455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.199 [2024-10-01 13:44:03.447600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.199 [2024-10-01 13:44:03.447638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.199 [2024-10-01 13:44:03.447657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.199 [2024-10-01 13:44:03.447693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.199 [2024-10-01 13:44:03.447745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.199 [2024-10-01 13:44:03.447767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.199 [2024-10-01 13:44:03.447782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.199 [2024-10-01 13:44:03.447815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.199 [2024-10-01 13:44:03.451142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.199 [2024-10-01 13:44:03.451273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.199 [2024-10-01 13:44:03.451312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.199 [2024-10-01 13:44:03.451332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.199 [2024-10-01 13:44:03.452109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.199 [2024-10-01 13:44:03.452352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.199 [2024-10-01 13:44:03.452390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.199 [2024-10-01 13:44:03.452408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.199 [2024-10-01 13:44:03.452452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.199 [2024-10-01 13:44:03.457577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.199 [2024-10-01 13:44:03.457713] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.199 [2024-10-01 13:44:03.457776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.199 [2024-10-01 13:44:03.457799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.199 [2024-10-01 13:44:03.458913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.199 [2024-10-01 13:44:03.459153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.199 [2024-10-01 13:44:03.459195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.199 [2024-10-01 13:44:03.459225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.199 [2024-10-01 13:44:03.460432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.199 [2024-10-01 13:44:03.461247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.199 [2024-10-01 13:44:03.461372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.199 [2024-10-01 13:44:03.461407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.199 [2024-10-01 13:44:03.461425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.199 [2024-10-01 13:44:03.461707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.199 [2024-10-01 13:44:03.461895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.199 [2024-10-01 13:44:03.461934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.199 [2024-10-01 13:44:03.461952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.199 [2024-10-01 13:44:03.462066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.199 [2024-10-01 13:44:03.468674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.199 [2024-10-01 13:44:03.468816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.199 [2024-10-01 13:44:03.468864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.199 [2024-10-01 13:44:03.468885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.199 [2024-10-01 13:44:03.468920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.199 [2024-10-01 13:44:03.468953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.199 [2024-10-01 13:44:03.468971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.199 [2024-10-01 13:44:03.468985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.199 [2024-10-01 13:44:03.469018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.199 [2024-10-01 13:44:03.472111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.199 [2024-10-01 13:44:03.472252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.199 [2024-10-01 13:44:03.472287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.199 [2024-10-01 13:44:03.472306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.199 [2024-10-01 13:44:03.472340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.199 [2024-10-01 13:44:03.472408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.199 [2024-10-01 13:44:03.472431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.199 [2024-10-01 13:44:03.472446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.199 [2024-10-01 13:44:03.472479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.199 [2024-10-01 13:44:03.480102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.199 [2024-10-01 13:44:03.480305] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.200 [2024-10-01 13:44:03.480345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.200 [2024-10-01 13:44:03.480366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.200 [2024-10-01 13:44:03.480405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.200 [2024-10-01 13:44:03.480438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.200 [2024-10-01 13:44:03.480456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.200 [2024-10-01 13:44:03.480472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.200 [2024-10-01 13:44:03.480506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.200 [2024-10-01 13:44:03.482224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.200 [2024-10-01 13:44:03.483276] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.200 [2024-10-01 13:44:03.483329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.200 [2024-10-01 13:44:03.483351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.200 [2024-10-01 13:44:03.483565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.200 [2024-10-01 13:44:03.483618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.200 [2024-10-01 13:44:03.483639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.200 [2024-10-01 13:44:03.483654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.200 [2024-10-01 13:44:03.483688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.200 [2024-10-01 13:44:03.490560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.200 [2024-10-01 13:44:03.490696] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.200 [2024-10-01 13:44:03.490742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.200 [2024-10-01 13:44:03.490761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.200 [2024-10-01 13:44:03.490795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.200 [2024-10-01 13:44:03.490828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.200 [2024-10-01 13:44:03.490846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.200 [2024-10-01 13:44:03.490861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.200 [2024-10-01 13:44:03.490894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.200 [2024-10-01 13:44:03.492329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.200 [2024-10-01 13:44:03.492460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.200 [2024-10-01 13:44:03.492504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.200 [2024-10-01 13:44:03.492549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.200 [2024-10-01 13:44:03.493909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.200 [2024-10-01 13:44:03.494886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.200 [2024-10-01 13:44:03.494930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.200 [2024-10-01 13:44:03.494949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.200 [2024-10-01 13:44:03.495097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.200 [2024-10-01 13:44:03.501491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.200 [2024-10-01 13:44:03.501772] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.200 [2024-10-01 13:44:03.501822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.200 [2024-10-01 13:44:03.501847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.200 [2024-10-01 13:44:03.501930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.200 [2024-10-01 13:44:03.501966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.200 [2024-10-01 13:44:03.501984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.200 [2024-10-01 13:44:03.501998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.200 [2024-10-01 13:44:03.502034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.200 [2024-10-01 13:44:03.503565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.200 [2024-10-01 13:44:03.504791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.200 [2024-10-01 13:44:03.504840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.200 [2024-10-01 13:44:03.504861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.200 [2024-10-01 13:44:03.505589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.200 [2024-10-01 13:44:03.505705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.200 [2024-10-01 13:44:03.505740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.200 [2024-10-01 13:44:03.505758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.200 [2024-10-01 13:44:03.505799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.200 [2024-10-01 13:44:03.511606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.200 [2024-10-01 13:44:03.511742] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.200 [2024-10-01 13:44:03.511789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.200 [2024-10-01 13:44:03.511837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.200 [2024-10-01 13:44:03.511895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.200 [2024-10-01 13:44:03.511932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.200 [2024-10-01 13:44:03.511951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.200 [2024-10-01 13:44:03.511965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.200 [2024-10-01 13:44:03.512925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.200 [2024-10-01 13:44:03.513659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.200 [2024-10-01 13:44:03.513777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.200 [2024-10-01 13:44:03.513820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.200 [2024-10-01 13:44:03.513841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.200 [2024-10-01 13:44:03.513888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.200 [2024-10-01 13:44:03.513923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.200 [2024-10-01 13:44:03.513941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.200 [2024-10-01 13:44:03.513956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.200 [2024-10-01 13:44:03.513988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.200 [2024-10-01 13:44:03.521708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.200 [2024-10-01 13:44:03.521835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.200 [2024-10-01 13:44:03.521879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.200 [2024-10-01 13:44:03.521900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.200 [2024-10-01 13:44:03.523270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.200 [2024-10-01 13:44:03.524283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.200 [2024-10-01 13:44:03.524327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.200 [2024-10-01 13:44:03.524346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.200 [2024-10-01 13:44:03.524485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.200 [2024-10-01 13:44:03.524531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.200 [2024-10-01 13:44:03.524646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.200 [2024-10-01 13:44:03.524694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.200 [2024-10-01 13:44:03.524716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.200 [2024-10-01 13:44:03.524751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.200 [2024-10-01 13:44:03.524783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.200 [2024-10-01 13:44:03.524819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.200 [2024-10-01 13:44:03.524834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.200 [2024-10-01 13:44:03.524867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.200 [2024-10-01 13:44:03.532743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.200 [2024-10-01 13:44:03.532885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.200 [2024-10-01 13:44:03.532930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.200 [2024-10-01 13:44:03.532951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.201 [2024-10-01 13:44:03.534051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.201 [2024-10-01 13:44:03.534742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.201 [2024-10-01 13:44:03.534784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.201 [2024-10-01 13:44:03.534803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.201 [2024-10-01 13:44:03.534914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.201 [2024-10-01 13:44:03.534962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.201 [2024-10-01 13:44:03.535057] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.201 [2024-10-01 13:44:03.535099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.201 [2024-10-01 13:44:03.535121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.201 [2024-10-01 13:44:03.535399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.201 [2024-10-01 13:44:03.535582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.201 [2024-10-01 13:44:03.535617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.201 [2024-10-01 13:44:03.535634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.201 [2024-10-01 13:44:03.535747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.201 [2024-10-01 13:44:03.542857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.201 [2024-10-01 13:44:03.542984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.201 [2024-10-01 13:44:03.543027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.201 [2024-10-01 13:44:03.543047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.201 [2024-10-01 13:44:03.543081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.201 [2024-10-01 13:44:03.543114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.201 [2024-10-01 13:44:03.543142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.201 [2024-10-01 13:44:03.543168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.201 [2024-10-01 13:44:03.543213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.201 [2024-10-01 13:44:03.545666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.201 [2024-10-01 13:44:03.545810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.201 [2024-10-01 13:44:03.545845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.201 [2024-10-01 13:44:03.545863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.201 [2024-10-01 13:44:03.545897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.201 [2024-10-01 13:44:03.545930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.201 [2024-10-01 13:44:03.545948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.201 [2024-10-01 13:44:03.545962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.201 [2024-10-01 13:44:03.545994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.201 [2024-10-01 13:44:03.553491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.201 [2024-10-01 13:44:03.553628] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.201 [2024-10-01 13:44:03.553672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.201 [2024-10-01 13:44:03.553693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.201 [2024-10-01 13:44:03.553728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.201 [2024-10-01 13:44:03.553760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.201 [2024-10-01 13:44:03.553778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.201 [2024-10-01 13:44:03.553792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.201 [2024-10-01 13:44:03.553826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.201 [2024-10-01 13:44:03.556790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.201 [2024-10-01 13:44:03.556909] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.201 [2024-10-01 13:44:03.556951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.201 [2024-10-01 13:44:03.556972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.201 [2024-10-01 13:44:03.557006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.201 [2024-10-01 13:44:03.557038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.201 [2024-10-01 13:44:03.557056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.201 [2024-10-01 13:44:03.557070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.201 [2024-10-01 13:44:03.557101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.201 [2024-10-01 13:44:03.563801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.201 [2024-10-01 13:44:03.563937] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.201 [2024-10-01 13:44:03.563974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.201 [2024-10-01 13:44:03.563993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.201 [2024-10-01 13:44:03.564051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.201 [2024-10-01 13:44:03.564085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.201 [2024-10-01 13:44:03.564103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.201 [2024-10-01 13:44:03.564118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.201 [2024-10-01 13:44:03.564167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.201 [2024-10-01 13:44:03.568030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.201 [2024-10-01 13:44:03.568172] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.201 [2024-10-01 13:44:03.568217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.201 [2024-10-01 13:44:03.568238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.201 [2024-10-01 13:44:03.568275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.201 [2024-10-01 13:44:03.568308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.201 [2024-10-01 13:44:03.568326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.201 [2024-10-01 13:44:03.568340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.201 [2024-10-01 13:44:03.568373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.201 [2024-10-01 13:44:03.574640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.201 [2024-10-01 13:44:03.574764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.201 [2024-10-01 13:44:03.574806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.201 [2024-10-01 13:44:03.574827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.201 [2024-10-01 13:44:03.574861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.201 [2024-10-01 13:44:03.574894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.201 [2024-10-01 13:44:03.574912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.201 [2024-10-01 13:44:03.574926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.201 [2024-10-01 13:44:03.574958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.201 [2024-10-01 13:44:03.578227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.201 [2024-10-01 13:44:03.578366] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.201 [2024-10-01 13:44:03.578399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.201 [2024-10-01 13:44:03.578418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.201 [2024-10-01 13:44:03.578453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.201 [2024-10-01 13:44:03.578486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.201 [2024-10-01 13:44:03.578504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.201 [2024-10-01 13:44:03.578568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.201 [2024-10-01 13:44:03.578608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.201 [2024-10-01 13:44:03.585740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.201 [2024-10-01 13:44:03.585872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.201 [2024-10-01 13:44:03.585908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.201 [2024-10-01 13:44:03.585928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.201 [2024-10-01 13:44:03.585963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.202 [2024-10-01 13:44:03.585996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.202 [2024-10-01 13:44:03.586014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.202 [2024-10-01 13:44:03.586029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.202 [2024-10-01 13:44:03.586062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.202 [2024-10-01 13:44:03.589059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.202 [2024-10-01 13:44:03.589194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.202 [2024-10-01 13:44:03.589239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.202 [2024-10-01 13:44:03.589261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.202 [2024-10-01 13:44:03.589297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.202 [2024-10-01 13:44:03.589330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.202 [2024-10-01 13:44:03.589348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.202 [2024-10-01 13:44:03.589362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.202 [2024-10-01 13:44:03.589394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.202 [2024-10-01 13:44:03.596719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.202 [2024-10-01 13:44:03.596844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.202 [2024-10-01 13:44:03.596888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.202 [2024-10-01 13:44:03.596908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.202 [2024-10-01 13:44:03.596943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.202 [2024-10-01 13:44:03.596975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.202 [2024-10-01 13:44:03.596992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.202 [2024-10-01 13:44:03.597007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.202 [2024-10-01 13:44:03.597039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.202 [2024-10-01 13:44:03.599946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.202 [2024-10-01 13:44:03.600071] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.202 [2024-10-01 13:44:03.600136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.202 [2024-10-01 13:44:03.600158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.202 [2024-10-01 13:44:03.600194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.202 [2024-10-01 13:44:03.600227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.202 [2024-10-01 13:44:03.600245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.202 [2024-10-01 13:44:03.600259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.202 [2024-10-01 13:44:03.600291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.202 [2024-10-01 13:44:03.606816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.202 [2024-10-01 13:44:03.606939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.202 [2024-10-01 13:44:03.606982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.202 [2024-10-01 13:44:03.607003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.202 [2024-10-01 13:44:03.607038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.202 [2024-10-01 13:44:03.607070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.202 [2024-10-01 13:44:03.607088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.202 [2024-10-01 13:44:03.607102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.202 [2024-10-01 13:44:03.607142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.202 [2024-10-01 13:44:03.610943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.202 [2024-10-01 13:44:03.611065] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.202 [2024-10-01 13:44:03.611107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.202 [2024-10-01 13:44:03.611128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.202 [2024-10-01 13:44:03.611162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.202 [2024-10-01 13:44:03.611195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.202 [2024-10-01 13:44:03.611213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.202 [2024-10-01 13:44:03.611227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.202 [2024-10-01 13:44:03.611259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.202 [2024-10-01 13:44:03.617515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.202 [2024-10-01 13:44:03.617724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.202 [2024-10-01 13:44:03.617761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.202 [2024-10-01 13:44:03.617780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.202 [2024-10-01 13:44:03.617819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.202 [2024-10-01 13:44:03.617892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.202 [2024-10-01 13:44:03.617912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.202 [2024-10-01 13:44:03.617927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.202 [2024-10-01 13:44:03.619069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.202 [2024-10-01 13:44:03.621277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.202 [2024-10-01 13:44:03.621439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.202 [2024-10-01 13:44:03.621486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.202 [2024-10-01 13:44:03.621508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.202 [2024-10-01 13:44:03.621561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.202 [2024-10-01 13:44:03.621597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.202 [2024-10-01 13:44:03.621616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.202 [2024-10-01 13:44:03.621632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.202 [2024-10-01 13:44:03.621903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.202 [2024-10-01 13:44:03.627669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.202 [2024-10-01 13:44:03.627792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.202 [2024-10-01 13:44:03.627827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.202 [2024-10-01 13:44:03.627846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.202 [2024-10-01 13:44:03.628818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.202 [2024-10-01 13:44:03.629068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.202 [2024-10-01 13:44:03.629109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.202 [2024-10-01 13:44:03.629134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.202 [2024-10-01 13:44:03.629193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.202 [2024-10-01 13:44:03.632277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.202 [2024-10-01 13:44:03.632453] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.202 [2024-10-01 13:44:03.632496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.202 [2024-10-01 13:44:03.632518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.202 [2024-10-01 13:44:03.632566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.202 [2024-10-01 13:44:03.632602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.202 [2024-10-01 13:44:03.632620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.203 [2024-10-01 13:44:03.632634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.203 [2024-10-01 13:44:03.632666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.203 [2024-10-01 13:44:03.640079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.203 [2024-10-01 13:44:03.640203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.203 [2024-10-01 13:44:03.640238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.203 [2024-10-01 13:44:03.640257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.203 [2024-10-01 13:44:03.640290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.203 [2024-10-01 13:44:03.640323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.203 [2024-10-01 13:44:03.640341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.203 [2024-10-01 13:44:03.640355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.203 [2024-10-01 13:44:03.640388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.203 [2024-10-01 13:44:03.643325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.203 [2024-10-01 13:44:03.643441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.203 [2024-10-01 13:44:03.643482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.203 [2024-10-01 13:44:03.643503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.203 [2024-10-01 13:44:03.643550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.203 [2024-10-01 13:44:03.643587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.203 [2024-10-01 13:44:03.643606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.203 [2024-10-01 13:44:03.643620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.203 [2024-10-01 13:44:03.643652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.203 [2024-10-01 13:44:03.650251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.203 [2024-10-01 13:44:03.650379] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.203 [2024-10-01 13:44:03.650424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.203 [2024-10-01 13:44:03.650445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.203 [2024-10-01 13:44:03.650480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.203 [2024-10-01 13:44:03.650513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.203 [2024-10-01 13:44:03.650531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.203 [2024-10-01 13:44:03.650567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.203 [2024-10-01 13:44:03.650602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.203 [2024-10-01 13:44:03.654635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.203 [2024-10-01 13:44:03.654798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.203 [2024-10-01 13:44:03.654847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.203 [2024-10-01 13:44:03.654890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.203 [2024-10-01 13:44:03.654929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.203 [2024-10-01 13:44:03.654975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.203 [2024-10-01 13:44:03.654993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.203 [2024-10-01 13:44:03.655007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.203 [2024-10-01 13:44:03.655047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.203 [2024-10-01 13:44:03.661124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.203 [2024-10-01 13:44:03.661305] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.203 [2024-10-01 13:44:03.661371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.203 [2024-10-01 13:44:03.661407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.203 [2024-10-01 13:44:03.662579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.203 [2024-10-01 13:44:03.662845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.203 [2024-10-01 13:44:03.662884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.203 [2024-10-01 13:44:03.662904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.203 [2024-10-01 13:44:03.664009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.203 [2024-10-01 13:44:03.664774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.203 [2024-10-01 13:44:03.664894] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.203 [2024-10-01 13:44:03.664936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.203 [2024-10-01 13:44:03.664957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.203 [2024-10-01 13:44:03.664992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.203 [2024-10-01 13:44:03.665026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.203 [2024-10-01 13:44:03.665055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.203 [2024-10-01 13:44:03.665076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.203 [2024-10-01 13:44:03.665350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.203 8586.45 IOPS, 33.54 MiB/s [2024-10-01 13:44:03.672259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.203 [2024-10-01 13:44:03.672400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.203 [2024-10-01 13:44:03.672442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.203 [2024-10-01 13:44:03.672462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.203 [2024-10-01 13:44:03.672498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.203 [2024-10-01 13:44:03.672531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.203 [2024-10-01 13:44:03.672595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.203 [2024-10-01 13:44:03.672613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.203 [2024-10-01 13:44:03.672649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.203 [2024-10-01 13:44:03.675661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.203 [2024-10-01 13:44:03.675790] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.203 [2024-10-01 13:44:03.675824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.203 [2024-10-01 13:44:03.675844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.203 [2024-10-01 13:44:03.675890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.203 [2024-10-01 13:44:03.675927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.203 [2024-10-01 13:44:03.675945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.203 [2024-10-01 13:44:03.675960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.203 [2024-10-01 13:44:03.675993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.203 [2024-10-01 13:44:03.683562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.203 [2024-10-01 13:44:03.683690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.203 [2024-10-01 13:44:03.683724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.203 [2024-10-01 13:44:03.683743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.203 [2024-10-01 13:44:03.683785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.203 [2024-10-01 13:44:03.683820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.203 [2024-10-01 13:44:03.683838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.203 [2024-10-01 13:44:03.683853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.203 [2024-10-01 13:44:03.683911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.203 [2024-10-01 13:44:03.686663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.203 [2024-10-01 13:44:03.686957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.203 [2024-10-01 13:44:03.687003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.203 [2024-10-01 13:44:03.687026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.203 [2024-10-01 13:44:03.687069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.203 [2024-10-01 13:44:03.687105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.203 [2024-10-01 13:44:03.687123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.203 [2024-10-01 13:44:03.687137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.203 [2024-10-01 13:44:03.687170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.203 [2024-10-01 13:44:03.694765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.204 [2024-10-01 13:44:03.695127] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.204 [2024-10-01 13:44:03.695193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.204 [2024-10-01 13:44:03.695228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.204 [2024-10-01 13:44:03.695389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.204 [2024-10-01 13:44:03.695480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.204 [2024-10-01 13:44:03.695512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.204 [2024-10-01 13:44:03.695558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.204 [2024-10-01 13:44:03.695618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.204 [2024-10-01 13:44:03.698371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.204 [2024-10-01 13:44:03.698501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.204 [2024-10-01 13:44:03.698558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.204 [2024-10-01 13:44:03.698582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.204 [2024-10-01 13:44:03.698618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.204 [2024-10-01 13:44:03.698652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.204 [2024-10-01 13:44:03.698670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.204 [2024-10-01 13:44:03.698685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.204 [2024-10-01 13:44:03.698717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.204 [2024-10-01 13:44:03.704942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.204 [2024-10-01 13:44:03.705077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.204 [2024-10-01 13:44:03.705111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.204 [2024-10-01 13:44:03.705139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.204 [2024-10-01 13:44:03.705189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.204 [2024-10-01 13:44:03.705236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.204 [2024-10-01 13:44:03.705256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.204 [2024-10-01 13:44:03.705271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.204 [2024-10-01 13:44:03.705305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.204 [2024-10-01 13:44:03.708567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.204 [2024-10-01 13:44:03.708706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.204 [2024-10-01 13:44:03.708742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.204 [2024-10-01 13:44:03.708761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.204 [2024-10-01 13:44:03.708822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.204 [2024-10-01 13:44:03.708857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.204 [2024-10-01 13:44:03.708875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.204 [2024-10-01 13:44:03.708890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.204 [2024-10-01 13:44:03.708922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.204 [2024-10-01 13:44:03.715053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.204 [2024-10-01 13:44:03.716142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.204 [2024-10-01 13:44:03.716190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.204 [2024-10-01 13:44:03.716212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.204 [2024-10-01 13:44:03.716407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.204 [2024-10-01 13:44:03.716468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.204 [2024-10-01 13:44:03.716490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.204 [2024-10-01 13:44:03.716506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.204 [2024-10-01 13:44:03.716556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.204 [2024-10-01 13:44:03.719596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.204 [2024-10-01 13:44:03.719757] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.204 [2024-10-01 13:44:03.719799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.204 [2024-10-01 13:44:03.719819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.204 [2024-10-01 13:44:03.719854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.204 [2024-10-01 13:44:03.719897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.204 [2024-10-01 13:44:03.719918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.204 [2024-10-01 13:44:03.719932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.204 [2024-10-01 13:44:03.719964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.204 [2024-10-01 13:44:03.727366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.204 [2024-10-01 13:44:03.727492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.204 [2024-10-01 13:44:03.727525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.204 [2024-10-01 13:44:03.727563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.204 [2024-10-01 13:44:03.727600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.204 [2024-10-01 13:44:03.727632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.204 [2024-10-01 13:44:03.727650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.204 [2024-10-01 13:44:03.727693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.204 [2024-10-01 13:44:03.727729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.204 [2024-10-01 13:44:03.730644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.204 [2024-10-01 13:44:03.730771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.204 [2024-10-01 13:44:03.730805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.204 [2024-10-01 13:44:03.730824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.204 [2024-10-01 13:44:03.730858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.204 [2024-10-01 13:44:03.730891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.204 [2024-10-01 13:44:03.730908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.204 [2024-10-01 13:44:03.730923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.204 [2024-10-01 13:44:03.730954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.204 [2024-10-01 13:44:03.737531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.204 [2024-10-01 13:44:03.737667] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.204 [2024-10-01 13:44:03.737700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.204 [2024-10-01 13:44:03.737719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.204 [2024-10-01 13:44:03.737752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.204 [2024-10-01 13:44:03.737785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.204 [2024-10-01 13:44:03.737803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.204 [2024-10-01 13:44:03.737818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.204 [2024-10-01 13:44:03.737851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.204 [2024-10-01 13:44:03.741717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.204 [2024-10-01 13:44:03.741836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.204 [2024-10-01 13:44:03.741868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.204 [2024-10-01 13:44:03.741887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.204 [2024-10-01 13:44:03.741920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.204 [2024-10-01 13:44:03.741953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.204 [2024-10-01 13:44:03.741971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.204 [2024-10-01 13:44:03.741986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.204 [2024-10-01 13:44:03.742018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.204 [2024-10-01 13:44:03.748346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.204 [2024-10-01 13:44:03.748476] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.204 [2024-10-01 13:44:03.748549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.205 [2024-10-01 13:44:03.748573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.205 [2024-10-01 13:44:03.748609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.205 [2024-10-01 13:44:03.748643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.205 [2024-10-01 13:44:03.748661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.205 [2024-10-01 13:44:03.748675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.205 [2024-10-01 13:44:03.748708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.205 [2024-10-01 13:44:03.751960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.205 [2024-10-01 13:44:03.752117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.205 [2024-10-01 13:44:03.752164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.205 [2024-10-01 13:44:03.752185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.205 [2024-10-01 13:44:03.752221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.205 [2024-10-01 13:44:03.752254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.205 [2024-10-01 13:44:03.752272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.205 [2024-10-01 13:44:03.752288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.205 [2024-10-01 13:44:03.752320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.205 [2024-10-01 13:44:03.759347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.205 [2024-10-01 13:44:03.759482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.205 [2024-10-01 13:44:03.759515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.205 [2024-10-01 13:44:03.759550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.205 [2024-10-01 13:44:03.759590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.205 [2024-10-01 13:44:03.759623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.205 [2024-10-01 13:44:03.759641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.205 [2024-10-01 13:44:03.759655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.205 [2024-10-01 13:44:03.759689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.205 [2024-10-01 13:44:03.762763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.205 [2024-10-01 13:44:03.762880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.205 [2024-10-01 13:44:03.762912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.205 [2024-10-01 13:44:03.762931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.205 [2024-10-01 13:44:03.762965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.205 [2024-10-01 13:44:03.763022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.205 [2024-10-01 13:44:03.763042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.205 [2024-10-01 13:44:03.763056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.205 [2024-10-01 13:44:03.763088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.205 [2024-10-01 13:44:03.770884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.205 [2024-10-01 13:44:03.771036] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.205 [2024-10-01 13:44:03.771071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.205 [2024-10-01 13:44:03.771101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.205 [2024-10-01 13:44:03.771136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.205 [2024-10-01 13:44:03.771169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.205 [2024-10-01 13:44:03.771194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.205 [2024-10-01 13:44:03.771221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.205 [2024-10-01 13:44:03.771270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.205 [2024-10-01 13:44:03.772856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.205 [2024-10-01 13:44:03.772982] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.205 [2024-10-01 13:44:03.773018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.205 [2024-10-01 13:44:03.773037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.205 [2024-10-01 13:44:03.774005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.205 [2024-10-01 13:44:03.774229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.205 [2024-10-01 13:44:03.774267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.205 [2024-10-01 13:44:03.774286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.205 [2024-10-01 13:44:03.774330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.205 [2024-10-01 13:44:03.781155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.205 [2024-10-01 13:44:03.781286] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.205 [2024-10-01 13:44:03.781321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.205 [2024-10-01 13:44:03.781341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.205 [2024-10-01 13:44:03.781375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.205 [2024-10-01 13:44:03.781408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.205 [2024-10-01 13:44:03.781425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.205 [2024-10-01 13:44:03.781440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.205 [2024-10-01 13:44:03.781503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.205 [2024-10-01 13:44:03.785331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.205 [2024-10-01 13:44:03.785455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.205 [2024-10-01 13:44:03.785489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.205 [2024-10-01 13:44:03.785508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.205 [2024-10-01 13:44:03.785557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.205 [2024-10-01 13:44:03.785594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.205 [2024-10-01 13:44:03.785612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.205 [2024-10-01 13:44:03.785627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.205 [2024-10-01 13:44:03.785659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.205 [2024-10-01 13:44:03.791933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.205 [2024-10-01 13:44:03.792059] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.205 [2024-10-01 13:44:03.792093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.205 [2024-10-01 13:44:03.792111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.205 [2024-10-01 13:44:03.792145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.205 [2024-10-01 13:44:03.792178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.205 [2024-10-01 13:44:03.792195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.205 [2024-10-01 13:44:03.792210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.205 [2024-10-01 13:44:03.792243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.205 [2024-10-01 13:44:03.795508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.205 [2024-10-01 13:44:03.795666] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.205 [2024-10-01 13:44:03.795700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.205 [2024-10-01 13:44:03.795719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.205 [2024-10-01 13:44:03.795753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.205 [2024-10-01 13:44:03.795786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.205 [2024-10-01 13:44:03.795804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.205 [2024-10-01 13:44:03.795818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.205 [2024-10-01 13:44:03.795850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.205 [2024-10-01 13:44:03.802989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.205 [2024-10-01 13:44:03.803142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.205 [2024-10-01 13:44:03.803177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.205 [2024-10-01 13:44:03.803223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.205 [2024-10-01 13:44:03.803262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.205 [2024-10-01 13:44:03.803295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.206 [2024-10-01 13:44:03.803313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.206 [2024-10-01 13:44:03.803327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.206 [2024-10-01 13:44:03.803361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.206 [2024-10-01 13:44:03.806385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.206 [2024-10-01 13:44:03.806521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.206 [2024-10-01 13:44:03.806570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.206 [2024-10-01 13:44:03.806590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.206 [2024-10-01 13:44:03.806626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.206 [2024-10-01 13:44:03.806666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.206 [2024-10-01 13:44:03.806686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.206 [2024-10-01 13:44:03.806701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.206 [2024-10-01 13:44:03.806733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.206 [2024-10-01 13:44:03.814230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.206 [2024-10-01 13:44:03.814401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.206 [2024-10-01 13:44:03.814442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.206 [2024-10-01 13:44:03.814464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.206 [2024-10-01 13:44:03.814500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.206 [2024-10-01 13:44:03.814547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.206 [2024-10-01 13:44:03.814568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.206 [2024-10-01 13:44:03.814584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.206 [2024-10-01 13:44:03.814618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.206 [2024-10-01 13:44:03.816490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.206 [2024-10-01 13:44:03.817566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.206 [2024-10-01 13:44:03.817613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.206 [2024-10-01 13:44:03.817634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.206 [2024-10-01 13:44:03.817842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.206 [2024-10-01 13:44:03.817892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.206 [2024-10-01 13:44:03.817940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.206 [2024-10-01 13:44:03.817956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.206 [2024-10-01 13:44:03.817992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.206 [2024-10-01 13:44:03.824702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.206 [2024-10-01 13:44:03.824833] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.206 [2024-10-01 13:44:03.824868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.206 [2024-10-01 13:44:03.824887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.206 [2024-10-01 13:44:03.824921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.206 [2024-10-01 13:44:03.824954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.206 [2024-10-01 13:44:03.824972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.206 [2024-10-01 13:44:03.824986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.206 [2024-10-01 13:44:03.825019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.206 [2024-10-01 13:44:03.828894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.206 [2024-10-01 13:44:03.829016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.206 [2024-10-01 13:44:03.829050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.206 [2024-10-01 13:44:03.829072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.206 [2024-10-01 13:44:03.829114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.206 [2024-10-01 13:44:03.829148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.206 [2024-10-01 13:44:03.829166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.206 [2024-10-01 13:44:03.829181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.206 [2024-10-01 13:44:03.829213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.206 [2024-10-01 13:44:03.835445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.206 [2024-10-01 13:44:03.835586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.206 [2024-10-01 13:44:03.835621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.206 [2024-10-01 13:44:03.835640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.206 [2024-10-01 13:44:03.835675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.206 [2024-10-01 13:44:03.835709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.206 [2024-10-01 13:44:03.835734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.206 [2024-10-01 13:44:03.835750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.206 [2024-10-01 13:44:03.835784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.206 [2024-10-01 13:44:03.839071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.206 [2024-10-01 13:44:03.839194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.206 [2024-10-01 13:44:03.839228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.206 [2024-10-01 13:44:03.839247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.206 [2024-10-01 13:44:03.839281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.206 [2024-10-01 13:44:03.839313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.206 [2024-10-01 13:44:03.839331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.206 [2024-10-01 13:44:03.839346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.206 [2024-10-01 13:44:03.839382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.206 [2024-10-01 13:44:03.846574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.206 [2024-10-01 13:44:03.846704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.206 [2024-10-01 13:44:03.846738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.206 [2024-10-01 13:44:03.846757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.206 [2024-10-01 13:44:03.846792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.206 [2024-10-01 13:44:03.846825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.206 [2024-10-01 13:44:03.846843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.206 [2024-10-01 13:44:03.846858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.206 [2024-10-01 13:44:03.846891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.206 [2024-10-01 13:44:03.849991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.206 [2024-10-01 13:44:03.850116] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.206 [2024-10-01 13:44:03.850150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.206 [2024-10-01 13:44:03.850168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.206 [2024-10-01 13:44:03.850201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.206 [2024-10-01 13:44:03.850235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.206 [2024-10-01 13:44:03.850252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.206 [2024-10-01 13:44:03.850267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.206 [2024-10-01 13:44:03.850299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.206 [2024-10-01 13:44:03.857654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.206 [2024-10-01 13:44:03.857786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.206 [2024-10-01 13:44:03.857826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.206 [2024-10-01 13:44:03.857845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.206 [2024-10-01 13:44:03.857905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.206 [2024-10-01 13:44:03.857939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.206 [2024-10-01 13:44:03.857956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.206 [2024-10-01 13:44:03.857970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.206 [2024-10-01 13:44:03.858003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.207 [2024-10-01 13:44:03.860979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.207 [2024-10-01 13:44:03.861102] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.207 [2024-10-01 13:44:03.861140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.207 [2024-10-01 13:44:03.861167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.207 [2024-10-01 13:44:03.861203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.207 [2024-10-01 13:44:03.861236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.207 [2024-10-01 13:44:03.861254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.207 [2024-10-01 13:44:03.861268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.207 [2024-10-01 13:44:03.861301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.207 [2024-10-01 13:44:03.868109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.207 [2024-10-01 13:44:03.868323] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.207 [2024-10-01 13:44:03.868360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.207 [2024-10-01 13:44:03.868380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.207 [2024-10-01 13:44:03.868416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.207 [2024-10-01 13:44:03.868450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.207 [2024-10-01 13:44:03.868468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.207 [2024-10-01 13:44:03.868484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.207 [2024-10-01 13:44:03.868518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.207 [2024-10-01 13:44:03.872380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.207 [2024-10-01 13:44:03.872548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.207 [2024-10-01 13:44:03.872585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.207 [2024-10-01 13:44:03.872604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.207 [2024-10-01 13:44:03.872639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.207 [2024-10-01 13:44:03.872673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.207 [2024-10-01 13:44:03.872691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.207 [2024-10-01 13:44:03.872735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.207 [2024-10-01 13:44:03.872771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.207 [2024-10-01 13:44:03.879117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.207 [2024-10-01 13:44:03.879296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.207 [2024-10-01 13:44:03.879333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.207 [2024-10-01 13:44:03.879352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.207 [2024-10-01 13:44:03.879402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.207 [2024-10-01 13:44:03.879437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.207 [2024-10-01 13:44:03.879455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.207 [2024-10-01 13:44:03.879470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.207 [2024-10-01 13:44:03.879505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.207 [2024-10-01 13:44:03.882799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.207 [2024-10-01 13:44:03.882920] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.207 [2024-10-01 13:44:03.882954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.207 [2024-10-01 13:44:03.882974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.207 [2024-10-01 13:44:03.883007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.207 [2024-10-01 13:44:03.883040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.207 [2024-10-01 13:44:03.883058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.207 [2024-10-01 13:44:03.883073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.207 [2024-10-01 13:44:03.883105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.207 [2024-10-01 13:44:03.889269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.207 [2024-10-01 13:44:03.890322] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.207 [2024-10-01 13:44:03.890368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.207 [2024-10-01 13:44:03.890390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.207 [2024-10-01 13:44:03.890634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.207 [2024-10-01 13:44:03.890697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.207 [2024-10-01 13:44:03.890719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.207 [2024-10-01 13:44:03.890734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.207 [2024-10-01 13:44:03.890769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.207 [2024-10-01 13:44:03.893869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.207 [2024-10-01 13:44:03.893995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.207 [2024-10-01 13:44:03.894063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.207 [2024-10-01 13:44:03.894086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.207 [2024-10-01 13:44:03.894121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.207 [2024-10-01 13:44:03.894154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.207 [2024-10-01 13:44:03.894172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.207 [2024-10-01 13:44:03.894186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.207 [2024-10-01 13:44:03.895296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.207 [2024-10-01 13:44:03.901633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.207 [2024-10-01 13:44:03.901766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.207 [2024-10-01 13:44:03.901812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.207 [2024-10-01 13:44:03.901833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.207 [2024-10-01 13:44:03.901868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.207 [2024-10-01 13:44:03.901901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.207 [2024-10-01 13:44:03.901918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.207 [2024-10-01 13:44:03.901932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.207 [2024-10-01 13:44:03.901965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.207 [2024-10-01 13:44:03.904899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.207 [2024-10-01 13:44:03.905019] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.207 [2024-10-01 13:44:03.905061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.207 [2024-10-01 13:44:03.905083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.207 [2024-10-01 13:44:03.905118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.207 [2024-10-01 13:44:03.905152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.207 [2024-10-01 13:44:03.905169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.207 [2024-10-01 13:44:03.905183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.207 [2024-10-01 13:44:03.905215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.207 [2024-10-01 13:44:03.911857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.208 [2024-10-01 13:44:03.911998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.208 [2024-10-01 13:44:03.912034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.208 [2024-10-01 13:44:03.912054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.208 [2024-10-01 13:44:03.912097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.208 [2024-10-01 13:44:03.912152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.208 [2024-10-01 13:44:03.912172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.208 [2024-10-01 13:44:03.912186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.208 [2024-10-01 13:44:03.912450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.208 [2024-10-01 13:44:03.916056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.208 [2024-10-01 13:44:03.916179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.208 [2024-10-01 13:44:03.916218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.208 [2024-10-01 13:44:03.916237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.208 [2024-10-01 13:44:03.916271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.208 [2024-10-01 13:44:03.916303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.208 [2024-10-01 13:44:03.916322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.208 [2024-10-01 13:44:03.916336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.208 [2024-10-01 13:44:03.916367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.208 [2024-10-01 13:44:03.922612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.208 [2024-10-01 13:44:03.922743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.208 [2024-10-01 13:44:03.922784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.208 [2024-10-01 13:44:03.922804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.208 [2024-10-01 13:44:03.922838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.208 [2024-10-01 13:44:03.922871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.208 [2024-10-01 13:44:03.922889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.208 [2024-10-01 13:44:03.922911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.208 [2024-10-01 13:44:03.922945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.208 [2024-10-01 13:44:03.926221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.208 [2024-10-01 13:44:03.926353] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.208 [2024-10-01 13:44:03.926398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.208 [2024-10-01 13:44:03.926419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.208 [2024-10-01 13:44:03.926454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.208 [2024-10-01 13:44:03.926487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.208 [2024-10-01 13:44:03.926515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.208 [2024-10-01 13:44:03.926531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.208 [2024-10-01 13:44:03.926609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.208 [2024-10-01 13:44:03.933828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.208 [2024-10-01 13:44:03.934174] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.208 [2024-10-01 13:44:03.934228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.208 [2024-10-01 13:44:03.934250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.208 [2024-10-01 13:44:03.934297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.208 [2024-10-01 13:44:03.934334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.208 [2024-10-01 13:44:03.934352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.208 [2024-10-01 13:44:03.934366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.208 [2024-10-01 13:44:03.934402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.208 [2024-10-01 13:44:03.937270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.208 [2024-10-01 13:44:03.937395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.208 [2024-10-01 13:44:03.937445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.208 [2024-10-01 13:44:03.937466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.208 [2024-10-01 13:44:03.937501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.208 [2024-10-01 13:44:03.937549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.208 [2024-10-01 13:44:03.937571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.208 [2024-10-01 13:44:03.937586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.208 [2024-10-01 13:44:03.937620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.208 [2024-10-01 13:44:03.945104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.208 [2024-10-01 13:44:03.945252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.208 [2024-10-01 13:44:03.945288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.208 [2024-10-01 13:44:03.945307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.208 [2024-10-01 13:44:03.945342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.208 [2024-10-01 13:44:03.945376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.208 [2024-10-01 13:44:03.945394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.208 [2024-10-01 13:44:03.945408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.208 [2024-10-01 13:44:03.945442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.208 [2024-10-01 13:44:03.948332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.208 [2024-10-01 13:44:03.948451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.208 [2024-10-01 13:44:03.948484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.208 [2024-10-01 13:44:03.948549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.208 [2024-10-01 13:44:03.948591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.208 [2024-10-01 13:44:03.948625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.208 [2024-10-01 13:44:03.948645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.208 [2024-10-01 13:44:03.948660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.208 [2024-10-01 13:44:03.948692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.208 [2024-10-01 13:44:03.955235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.208 [2024-10-01 13:44:03.955364] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.208 [2024-10-01 13:44:03.955399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.208 [2024-10-01 13:44:03.955419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.208 [2024-10-01 13:44:03.955453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.208 [2024-10-01 13:44:03.955486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.208 [2024-10-01 13:44:03.955504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.208 [2024-10-01 13:44:03.955518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.208 [2024-10-01 13:44:03.955576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.208 [2024-10-01 13:44:03.959411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.208 [2024-10-01 13:44:03.959548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.208 [2024-10-01 13:44:03.959585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.208 [2024-10-01 13:44:03.959605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.208 [2024-10-01 13:44:03.959640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.208 [2024-10-01 13:44:03.959673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.208 [2024-10-01 13:44:03.959691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.208 [2024-10-01 13:44:03.959706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.208 [2024-10-01 13:44:03.959738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.208 [2024-10-01 13:44:03.966025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.208 [2024-10-01 13:44:03.966166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.208 [2024-10-01 13:44:03.966211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.208 [2024-10-01 13:44:03.966232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.209 [2024-10-01 13:44:03.966268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.209 [2024-10-01 13:44:03.966301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.209 [2024-10-01 13:44:03.966342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.209 [2024-10-01 13:44:03.966357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.209 [2024-10-01 13:44:03.966392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.209 [2024-10-01 13:44:03.969656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.209 [2024-10-01 13:44:03.969776] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.209 [2024-10-01 13:44:03.969810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.209 [2024-10-01 13:44:03.969829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.209 [2024-10-01 13:44:03.969862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.209 [2024-10-01 13:44:03.969895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.209 [2024-10-01 13:44:03.969912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.209 [2024-10-01 13:44:03.969927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.209 [2024-10-01 13:44:03.969959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.209 [2024-10-01 13:44:03.977081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.209 [2024-10-01 13:44:03.977217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.209 [2024-10-01 13:44:03.977262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.209 [2024-10-01 13:44:03.977283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.209 [2024-10-01 13:44:03.977318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.209 [2024-10-01 13:44:03.977352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.209 [2024-10-01 13:44:03.977369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.209 [2024-10-01 13:44:03.977384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.209 [2024-10-01 13:44:03.977417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.209 [2024-10-01 13:44:03.980506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.209 [2024-10-01 13:44:03.980639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.209 [2024-10-01 13:44:03.980686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.209 [2024-10-01 13:44:03.980707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.209 [2024-10-01 13:44:03.980741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.209 [2024-10-01 13:44:03.980773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.209 [2024-10-01 13:44:03.980792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.209 [2024-10-01 13:44:03.980806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.209 [2024-10-01 13:44:03.980838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.209 [2024-10-01 13:44:03.988287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.209 [2024-10-01 13:44:03.988422] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.209 [2024-10-01 13:44:03.988457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.209 [2024-10-01 13:44:03.988477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.209 [2024-10-01 13:44:03.988512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.209 [2024-10-01 13:44:03.988562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.209 [2024-10-01 13:44:03.988582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.209 [2024-10-01 13:44:03.988597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.209 [2024-10-01 13:44:03.988630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.209 [2024-10-01 13:44:03.991578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.209 [2024-10-01 13:44:03.991720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.209 [2024-10-01 13:44:03.991763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.209 [2024-10-01 13:44:03.991793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.209 [2024-10-01 13:44:03.991831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.209 [2024-10-01 13:44:03.991865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.209 [2024-10-01 13:44:03.991898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.209 [2024-10-01 13:44:03.991914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.209 [2024-10-01 13:44:03.991949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.209 [2024-10-01 13:44:03.998611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.209 [2024-10-01 13:44:03.998741] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.209 [2024-10-01 13:44:03.998776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.209 [2024-10-01 13:44:03.998795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.209 [2024-10-01 13:44:03.998829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.209 [2024-10-01 13:44:03.998863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.209 [2024-10-01 13:44:03.998880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.209 [2024-10-01 13:44:03.998895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.209 [2024-10-01 13:44:03.998928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.209 [2024-10-01 13:44:04.002831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.209 [2024-10-01 13:44:04.002956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.209 [2024-10-01 13:44:04.002991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.209 [2024-10-01 13:44:04.003010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.209 [2024-10-01 13:44:04.003068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.209 [2024-10-01 13:44:04.003116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.209 [2024-10-01 13:44:04.003140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.209 [2024-10-01 13:44:04.003155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.209 [2024-10-01 13:44:04.003193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.209 [2024-10-01 13:44:04.009556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.209 [2024-10-01 13:44:04.009681] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.209 [2024-10-01 13:44:04.009716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.209 [2024-10-01 13:44:04.009736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.209 [2024-10-01 13:44:04.009770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.209 [2024-10-01 13:44:04.009807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.209 [2024-10-01 13:44:04.009834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.209 [2024-10-01 13:44:04.009849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.209 [2024-10-01 13:44:04.009910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.209 [2024-10-01 13:44:04.013095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.209 [2024-10-01 13:44:04.013222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.209 [2024-10-01 13:44:04.013255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.209 [2024-10-01 13:44:04.013273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.209 [2024-10-01 13:44:04.013307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.209 [2024-10-01 13:44:04.013339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.209 [2024-10-01 13:44:04.013357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.209 [2024-10-01 13:44:04.013371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.209 [2024-10-01 13:44:04.013403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.209 [2024-10-01 13:44:04.020721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.209 [2024-10-01 13:44:04.020902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.209 [2024-10-01 13:44:04.020939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.209 [2024-10-01 13:44:04.020958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.209 [2024-10-01 13:44:04.020994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.209 [2024-10-01 13:44:04.021027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.210 [2024-10-01 13:44:04.021046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.210 [2024-10-01 13:44:04.021094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.210 [2024-10-01 13:44:04.021132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.210 [2024-10-01 13:44:04.024192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.210 [2024-10-01 13:44:04.024320] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.210 [2024-10-01 13:44:04.024366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.210 [2024-10-01 13:44:04.024387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.210 [2024-10-01 13:44:04.024436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.210 [2024-10-01 13:44:04.024471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.210 [2024-10-01 13:44:04.024489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.210 [2024-10-01 13:44:04.024504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.210 [2024-10-01 13:44:04.024550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.210 [2024-10-01 13:44:04.032132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.210 [2024-10-01 13:44:04.032258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.210 [2024-10-01 13:44:04.032291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.210 [2024-10-01 13:44:04.032309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.210 [2024-10-01 13:44:04.032343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.210 [2024-10-01 13:44:04.032380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.210 [2024-10-01 13:44:04.032411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.210 [2024-10-01 13:44:04.032429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.210 [2024-10-01 13:44:04.032463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.210 [2024-10-01 13:44:04.035377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.210 [2024-10-01 13:44:04.035505] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.210 [2024-10-01 13:44:04.035562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.210 [2024-10-01 13:44:04.035590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.210 [2024-10-01 13:44:04.035628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.210 [2024-10-01 13:44:04.035660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.210 [2024-10-01 13:44:04.035686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.210 [2024-10-01 13:44:04.035702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.210 [2024-10-01 13:44:04.035736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.210 [2024-10-01 13:44:04.042245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.210 [2024-10-01 13:44:04.042377] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.210 [2024-10-01 13:44:04.042446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.210 [2024-10-01 13:44:04.042470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.210 [2024-10-01 13:44:04.042506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.210 [2024-10-01 13:44:04.042789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.210 [2024-10-01 13:44:04.042835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.210 [2024-10-01 13:44:04.042854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.210 [2024-10-01 13:44:04.043009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.210 [2024-10-01 13:44:04.046322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.210 [2024-10-01 13:44:04.046445] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.210 [2024-10-01 13:44:04.046481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.210 [2024-10-01 13:44:04.046499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.210 [2024-10-01 13:44:04.046549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.210 [2024-10-01 13:44:04.046586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.210 [2024-10-01 13:44:04.046604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.210 [2024-10-01 13:44:04.046619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.210 [2024-10-01 13:44:04.046651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.210 [2024-10-01 13:44:04.052771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.210 [2024-10-01 13:44:04.052906] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.210 [2024-10-01 13:44:04.052954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.210 [2024-10-01 13:44:04.052975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.210 [2024-10-01 13:44:04.053010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.210 [2024-10-01 13:44:04.053044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.210 [2024-10-01 13:44:04.053061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.210 [2024-10-01 13:44:04.053076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.210 [2024-10-01 13:44:04.054184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.210 [2024-10-01 13:44:04.056424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.210 [2024-10-01 13:44:04.056560] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.210 [2024-10-01 13:44:04.056599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.210 [2024-10-01 13:44:04.056619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.210 [2024-10-01 13:44:04.056896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.210 [2024-10-01 13:44:04.057088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.210 [2024-10-01 13:44:04.057126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.210 [2024-10-01 13:44:04.057145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.210 [2024-10-01 13:44:04.057263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.210 [2024-10-01 13:44:04.063530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.210 [2024-10-01 13:44:04.063677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.210 [2024-10-01 13:44:04.063713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.210 [2024-10-01 13:44:04.063733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.210 [2024-10-01 13:44:04.063773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.210 [2024-10-01 13:44:04.063811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.210 [2024-10-01 13:44:04.063829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.210 [2024-10-01 13:44:04.063843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.210 [2024-10-01 13:44:04.063888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.210 [2024-10-01 13:44:04.066833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.210 [2024-10-01 13:44:04.066962] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.210 [2024-10-01 13:44:04.067010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.210 [2024-10-01 13:44:04.067032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.210 [2024-10-01 13:44:04.067066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.210 [2024-10-01 13:44:04.067098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.210 [2024-10-01 13:44:04.067116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.210 [2024-10-01 13:44:04.067131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.210 [2024-10-01 13:44:04.068276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.210 [2024-10-01 13:44:04.074420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.210 [2024-10-01 13:44:04.074560] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.210 [2024-10-01 13:44:04.074596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.210 [2024-10-01 13:44:04.074623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.210 [2024-10-01 13:44:04.074659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.210 [2024-10-01 13:44:04.074693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.210 [2024-10-01 13:44:04.074711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.210 [2024-10-01 13:44:04.074726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.210 [2024-10-01 13:44:04.074780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.210 [2024-10-01 13:44:04.077718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.211 [2024-10-01 13:44:04.077838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.211 [2024-10-01 13:44:04.077872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.211 [2024-10-01 13:44:04.077891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.211 [2024-10-01 13:44:04.077925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.211 [2024-10-01 13:44:04.077957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.211 [2024-10-01 13:44:04.077975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.211 [2024-10-01 13:44:04.077989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.211 [2024-10-01 13:44:04.078021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.211 [2024-10-01 13:44:04.084684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.211 [2024-10-01 13:44:04.084804] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.211 [2024-10-01 13:44:04.084847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.211 [2024-10-01 13:44:04.084868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.211 [2024-10-01 13:44:04.084902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.211 [2024-10-01 13:44:04.084934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.211 [2024-10-01 13:44:04.084951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.211 [2024-10-01 13:44:04.084966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.211 [2024-10-01 13:44:04.084998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.211 [2024-10-01 13:44:04.088768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.211 [2024-10-01 13:44:04.088884] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.211 [2024-10-01 13:44:04.088932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.211 [2024-10-01 13:44:04.088952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.211 [2024-10-01 13:44:04.088986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.211 [2024-10-01 13:44:04.089019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.211 [2024-10-01 13:44:04.089037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.211 [2024-10-01 13:44:04.089052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.211 [2024-10-01 13:44:04.089083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.211 [2024-10-01 13:44:04.095521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.211 [2024-10-01 13:44:04.095652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.211 [2024-10-01 13:44:04.095695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.211 [2024-10-01 13:44:04.095738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.211 [2024-10-01 13:44:04.095775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.211 [2024-10-01 13:44:04.095808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.211 [2024-10-01 13:44:04.095826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.211 [2024-10-01 13:44:04.095840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.211 [2024-10-01 13:44:04.095872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.211 [2024-10-01 13:44:04.099107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.211 [2024-10-01 13:44:04.099225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.211 [2024-10-01 13:44:04.099259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.211 [2024-10-01 13:44:04.099278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.211 [2024-10-01 13:44:04.099311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.211 [2024-10-01 13:44:04.099344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.211 [2024-10-01 13:44:04.099361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.211 [2024-10-01 13:44:04.099376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.211 [2024-10-01 13:44:04.099408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.211 [2024-10-01 13:44:04.106381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.211 [2024-10-01 13:44:04.106499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.211 [2024-10-01 13:44:04.106531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.211 [2024-10-01 13:44:04.106568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.211 [2024-10-01 13:44:04.106603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.211 [2024-10-01 13:44:04.106636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.211 [2024-10-01 13:44:04.106654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.211 [2024-10-01 13:44:04.106668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.211 [2024-10-01 13:44:04.106700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.211 [2024-10-01 13:44:04.109693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.211 [2024-10-01 13:44:04.109809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.211 [2024-10-01 13:44:04.109847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.211 [2024-10-01 13:44:04.109867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.211 [2024-10-01 13:44:04.109900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.211 [2024-10-01 13:44:04.109932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.211 [2024-10-01 13:44:04.109969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.211 [2024-10-01 13:44:04.109984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.211 [2024-10-01 13:44:04.110018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.211 [2024-10-01 13:44:04.117285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.211 [2024-10-01 13:44:04.117405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.211 [2024-10-01 13:44:04.117448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.211 [2024-10-01 13:44:04.117468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.211 [2024-10-01 13:44:04.117502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.211 [2024-10-01 13:44:04.117548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.211 [2024-10-01 13:44:04.117569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.211 [2024-10-01 13:44:04.117584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.211 [2024-10-01 13:44:04.117617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.211 [2024-10-01 13:44:04.120500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.211 [2024-10-01 13:44:04.120625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.211 [2024-10-01 13:44:04.120658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.211 [2024-10-01 13:44:04.120676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.211 [2024-10-01 13:44:04.120709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.211 [2024-10-01 13:44:04.120742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.211 [2024-10-01 13:44:04.120759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.211 [2024-10-01 13:44:04.120773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.211 [2024-10-01 13:44:04.120805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.211 [2024-10-01 13:44:04.127378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.211 [2024-10-01 13:44:04.127494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.212 [2024-10-01 13:44:04.127555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.212 [2024-10-01 13:44:04.127577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.212 [2024-10-01 13:44:04.127610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.212 [2024-10-01 13:44:04.127643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.212 [2024-10-01 13:44:04.127660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.212 [2024-10-01 13:44:04.127674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.212 [2024-10-01 13:44:04.127946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.212 [2024-10-01 13:44:04.131428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.212 [2024-10-01 13:44:04.131562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.212 [2024-10-01 13:44:04.131597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.212 [2024-10-01 13:44:04.131615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.212 [2024-10-01 13:44:04.131649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.212 [2024-10-01 13:44:04.131681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.212 [2024-10-01 13:44:04.131700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.212 [2024-10-01 13:44:04.131714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.212 [2024-10-01 13:44:04.131746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.212 [2024-10-01 13:44:04.138038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.212 [2024-10-01 13:44:04.138164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.212 [2024-10-01 13:44:04.138200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.212 [2024-10-01 13:44:04.138218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.212 [2024-10-01 13:44:04.138252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.212 [2024-10-01 13:44:04.138285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.212 [2024-10-01 13:44:04.138302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.212 [2024-10-01 13:44:04.138317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.212 [2024-10-01 13:44:04.138349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.212 [2024-10-01 13:44:04.141723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.212 [2024-10-01 13:44:04.141845] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.212 [2024-10-01 13:44:04.141901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.212 [2024-10-01 13:44:04.141921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.212 [2024-10-01 13:44:04.141955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.212 [2024-10-01 13:44:04.141988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.212 [2024-10-01 13:44:04.142005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.212 [2024-10-01 13:44:04.142020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.212 [2024-10-01 13:44:04.142052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.212 [2024-10-01 13:44:04.149024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.212 [2024-10-01 13:44:04.149305] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.212 [2024-10-01 13:44:04.149351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.212 [2024-10-01 13:44:04.149372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.212 [2024-10-01 13:44:04.149446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.212 [2024-10-01 13:44:04.149483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.212 [2024-10-01 13:44:04.149501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.212 [2024-10-01 13:44:04.149516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.212 [2024-10-01 13:44:04.149563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.212 [2024-10-01 13:44:04.152527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.212 [2024-10-01 13:44:04.152657] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.212 [2024-10-01 13:44:04.152708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.212 [2024-10-01 13:44:04.152728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.212 [2024-10-01 13:44:04.152763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.212 [2024-10-01 13:44:04.152795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.212 [2024-10-01 13:44:04.152813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.212 [2024-10-01 13:44:04.152827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.212 [2024-10-01 13:44:04.152858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.212 [2024-10-01 13:44:04.160116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.212 [2024-10-01 13:44:04.160232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.212 [2024-10-01 13:44:04.160280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.212 [2024-10-01 13:44:04.160301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.212 [2024-10-01 13:44:04.160334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.212 [2024-10-01 13:44:04.160367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.212 [2024-10-01 13:44:04.160384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.212 [2024-10-01 13:44:04.160398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.212 [2024-10-01 13:44:04.160430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.212 [2024-10-01 13:44:04.163318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.212 [2024-10-01 13:44:04.163432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.212 [2024-10-01 13:44:04.163478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.212 [2024-10-01 13:44:04.163498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.212 [2024-10-01 13:44:04.163532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.212 [2024-10-01 13:44:04.163583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.212 [2024-10-01 13:44:04.163602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.212 [2024-10-01 13:44:04.163632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.212 [2024-10-01 13:44:04.163666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.212 [2024-10-01 13:44:04.170213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.212 [2024-10-01 13:44:04.170330] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.212 [2024-10-01 13:44:04.170378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.212 [2024-10-01 13:44:04.170399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.212 [2024-10-01 13:44:04.170433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.212 [2024-10-01 13:44:04.170465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.212 [2024-10-01 13:44:04.170483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.212 [2024-10-01 13:44:04.170497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.212 [2024-10-01 13:44:04.170773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.212 [2024-10-01 13:44:04.174232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.212 [2024-10-01 13:44:04.174347] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.212 [2024-10-01 13:44:04.174389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.212 [2024-10-01 13:44:04.174410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.212 [2024-10-01 13:44:04.174443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.212 [2024-10-01 13:44:04.174474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.212 [2024-10-01 13:44:04.174492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.212 [2024-10-01 13:44:04.174506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.212 [2024-10-01 13:44:04.174553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.212 [2024-10-01 13:44:04.180690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.212 [2024-10-01 13:44:04.180807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.212 [2024-10-01 13:44:04.180852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.212 [2024-10-01 13:44:04.180872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.212 [2024-10-01 13:44:04.180906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.213 [2024-10-01 13:44:04.180938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.213 [2024-10-01 13:44:04.180955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.213 [2024-10-01 13:44:04.180970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.213 [2024-10-01 13:44:04.181002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.213 [2024-10-01 13:44:04.184325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.213 [2024-10-01 13:44:04.184459] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.213 [2024-10-01 13:44:04.184508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.213 [2024-10-01 13:44:04.184528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.213 [2024-10-01 13:44:04.184578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.213 [2024-10-01 13:44:04.184612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.213 [2024-10-01 13:44:04.184630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.213 [2024-10-01 13:44:04.184644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.213 [2024-10-01 13:44:04.184904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.213 [2024-10-01 13:44:04.191523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.213 [2024-10-01 13:44:04.191654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.213 [2024-10-01 13:44:04.191700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.213 [2024-10-01 13:44:04.191720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.213 [2024-10-01 13:44:04.191754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.213 [2024-10-01 13:44:04.191786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.213 [2024-10-01 13:44:04.191804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.213 [2024-10-01 13:44:04.191818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.213 [2024-10-01 13:44:04.191850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.213 [2024-10-01 13:44:04.194834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.213 [2024-10-01 13:44:04.194950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.213 [2024-10-01 13:44:04.194989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.213 [2024-10-01 13:44:04.195008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.213 [2024-10-01 13:44:04.195041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.213 [2024-10-01 13:44:04.195074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.213 [2024-10-01 13:44:04.195091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.213 [2024-10-01 13:44:04.195106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.213 [2024-10-01 13:44:04.195137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.213 [2024-10-01 13:44:04.202465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.213 [2024-10-01 13:44:04.202602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.213 [2024-10-01 13:44:04.202651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.213 [2024-10-01 13:44:04.202671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.213 [2024-10-01 13:44:04.202706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.213 [2024-10-01 13:44:04.202760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.213 [2024-10-01 13:44:04.202780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.213 [2024-10-01 13:44:04.202794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.213 [2024-10-01 13:44:04.202826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.213 [2024-10-01 13:44:04.205694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.213 [2024-10-01 13:44:04.205809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.213 [2024-10-01 13:44:04.205848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.213 [2024-10-01 13:44:04.205867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.213 [2024-10-01 13:44:04.205901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.213 [2024-10-01 13:44:04.205933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.213 [2024-10-01 13:44:04.205951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.213 [2024-10-01 13:44:04.205965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.213 [2024-10-01 13:44:04.205996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.213 [2024-10-01 13:44:04.212582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.213 [2024-10-01 13:44:04.212701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.213 [2024-10-01 13:44:04.212734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.213 [2024-10-01 13:44:04.212752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.213 [2024-10-01 13:44:04.212785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.213 [2024-10-01 13:44:04.212817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.213 [2024-10-01 13:44:04.212834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.213 [2024-10-01 13:44:04.212850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.213 [2024-10-01 13:44:04.212882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.213 [2024-10-01 13:44:04.216575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.213 [2024-10-01 13:44:04.216689] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.213 [2024-10-01 13:44:04.216735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.213 [2024-10-01 13:44:04.216756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.213 [2024-10-01 13:44:04.216789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.213 [2024-10-01 13:44:04.216822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.213 [2024-10-01 13:44:04.216840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.213 [2024-10-01 13:44:04.216854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.213 [2024-10-01 13:44:04.216908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.213 [2024-10-01 13:44:04.223093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.213 [2024-10-01 13:44:04.223211] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.213 [2024-10-01 13:44:04.223258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.213 [2024-10-01 13:44:04.223279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.213 [2024-10-01 13:44:04.223313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.213 [2024-10-01 13:44:04.223345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.213 [2024-10-01 13:44:04.223362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.213 [2024-10-01 13:44:04.223377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.213 [2024-10-01 13:44:04.223409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.213 [2024-10-01 13:44:04.226664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.213 [2024-10-01 13:44:04.226778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.213 [2024-10-01 13:44:04.226828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.213 [2024-10-01 13:44:04.226848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.213 [2024-10-01 13:44:04.226881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.213 [2024-10-01 13:44:04.226913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.213 [2024-10-01 13:44:04.226931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.213 [2024-10-01 13:44:04.226946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.213 [2024-10-01 13:44:04.227210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.213 [2024-10-01 13:44:04.233971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.213 [2024-10-01 13:44:04.234086] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.213 [2024-10-01 13:44:04.234119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.213 [2024-10-01 13:44:04.234137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.213 [2024-10-01 13:44:04.234170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.213 [2024-10-01 13:44:04.234202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.213 [2024-10-01 13:44:04.234219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.213 [2024-10-01 13:44:04.234234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.214 [2024-10-01 13:44:04.234265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.214 [2024-10-01 13:44:04.237421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.214 [2024-10-01 13:44:04.237551] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.214 [2024-10-01 13:44:04.237586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.214 [2024-10-01 13:44:04.237625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.214 [2024-10-01 13:44:04.237662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.214 [2024-10-01 13:44:04.237696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.214 [2024-10-01 13:44:04.237713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.214 [2024-10-01 13:44:04.237728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.214 [2024-10-01 13:44:04.237758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.214 [2024-10-01 13:44:04.245207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.214 [2024-10-01 13:44:04.245334] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.214 [2024-10-01 13:44:04.245374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.214 [2024-10-01 13:44:04.245393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.214 [2024-10-01 13:44:04.245427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.214 [2024-10-01 13:44:04.245460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.214 [2024-10-01 13:44:04.245477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.214 [2024-10-01 13:44:04.245492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.214 [2024-10-01 13:44:04.245524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.214 [2024-10-01 13:44:04.248561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.214 [2024-10-01 13:44:04.248682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.214 [2024-10-01 13:44:04.248723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.214 [2024-10-01 13:44:04.248743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.214 [2024-10-01 13:44:04.248778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.214 [2024-10-01 13:44:04.248810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.214 [2024-10-01 13:44:04.248828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.214 [2024-10-01 13:44:04.248843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.214 [2024-10-01 13:44:04.248874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.214 [2024-10-01 13:44:04.255458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.214 [2024-10-01 13:44:04.255600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.214 [2024-10-01 13:44:04.255648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.214 [2024-10-01 13:44:04.255669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.214 [2024-10-01 13:44:04.255703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.214 [2024-10-01 13:44:04.255736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.214 [2024-10-01 13:44:04.255778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.214 [2024-10-01 13:44:04.255794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.214 [2024-10-01 13:44:04.255828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.214 [2024-10-01 13:44:04.259639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.214 [2024-10-01 13:44:04.259755] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.214 [2024-10-01 13:44:04.259809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.214 [2024-10-01 13:44:04.259829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.214 [2024-10-01 13:44:04.259863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.214 [2024-10-01 13:44:04.259908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.214 [2024-10-01 13:44:04.259927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.214 [2024-10-01 13:44:04.259942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.214 [2024-10-01 13:44:04.259973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.214 [2024-10-01 13:44:04.266132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.214 [2024-10-01 13:44:04.266256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.214 [2024-10-01 13:44:04.266300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.214 [2024-10-01 13:44:04.266321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.214 [2024-10-01 13:44:04.266355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.214 [2024-10-01 13:44:04.266388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.214 [2024-10-01 13:44:04.266405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.214 [2024-10-01 13:44:04.266420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.214 [2024-10-01 13:44:04.266451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.214 [2024-10-01 13:44:04.269734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.214 [2024-10-01 13:44:04.269849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.214 [2024-10-01 13:44:04.269893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.214 [2024-10-01 13:44:04.269914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.214 [2024-10-01 13:44:04.269948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.214 [2024-10-01 13:44:04.269980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.214 [2024-10-01 13:44:04.269997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.214 [2024-10-01 13:44:04.270011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.214 [2024-10-01 13:44:04.270043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.214 [2024-10-01 13:44:04.276948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.214 [2024-10-01 13:44:04.277066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.214 [2024-10-01 13:44:04.277110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.214 [2024-10-01 13:44:04.277130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.214 [2024-10-01 13:44:04.277164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.214 [2024-10-01 13:44:04.277196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.214 [2024-10-01 13:44:04.277213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.214 [2024-10-01 13:44:04.277228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.214 [2024-10-01 13:44:04.277260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.214 [2024-10-01 13:44:04.280302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.214 [2024-10-01 13:44:04.280417] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.214 [2024-10-01 13:44:04.280466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.214 [2024-10-01 13:44:04.280486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.214 [2024-10-01 13:44:04.280519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.214 [2024-10-01 13:44:04.280569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.214 [2024-10-01 13:44:04.280589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.214 [2024-10-01 13:44:04.280603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.214 [2024-10-01 13:44:04.280634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.214 [2024-10-01 13:44:04.288088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.214 [2024-10-01 13:44:04.288206] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.214 [2024-10-01 13:44:04.288252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.214 [2024-10-01 13:44:04.288272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.214 [2024-10-01 13:44:04.288306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.214 [2024-10-01 13:44:04.288338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.214 [2024-10-01 13:44:04.288356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.214 [2024-10-01 13:44:04.288370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.214 [2024-10-01 13:44:04.288402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.214 [2024-10-01 13:44:04.291332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.214 [2024-10-01 13:44:04.291447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.215 [2024-10-01 13:44:04.291495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.215 [2024-10-01 13:44:04.291516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.215 [2024-10-01 13:44:04.291588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.215 [2024-10-01 13:44:04.291623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.215 [2024-10-01 13:44:04.291641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.215 [2024-10-01 13:44:04.291656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.215 [2024-10-01 13:44:04.291688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.215 [2024-10-01 13:44:04.298184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.215 [2024-10-01 13:44:04.298303] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.215 [2024-10-01 13:44:04.298335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.215 [2024-10-01 13:44:04.298354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.215 [2024-10-01 13:44:04.298387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.215 [2024-10-01 13:44:04.298419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.215 [2024-10-01 13:44:04.298437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.215 [2024-10-01 13:44:04.298451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.215 [2024-10-01 13:44:04.298483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.215 [2024-10-01 13:44:04.302296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.215 [2024-10-01 13:44:04.302412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.215 [2024-10-01 13:44:04.302463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.215 [2024-10-01 13:44:04.302483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.215 [2024-10-01 13:44:04.302517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.215 [2024-10-01 13:44:04.302568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.215 [2024-10-01 13:44:04.302589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.215 [2024-10-01 13:44:04.302603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.215 [2024-10-01 13:44:04.302635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.215 [2024-10-01 13:44:04.308833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.215 [2024-10-01 13:44:04.308951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.215 [2024-10-01 13:44:04.308998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.215 [2024-10-01 13:44:04.309018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.215 [2024-10-01 13:44:04.309052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.215 [2024-10-01 13:44:04.309085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.215 [2024-10-01 13:44:04.309102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.215 [2024-10-01 13:44:04.309136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.215 [2024-10-01 13:44:04.309171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.215 [2024-10-01 13:44:04.312394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.215 [2024-10-01 13:44:04.312509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.215 [2024-10-01 13:44:04.312567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.215 [2024-10-01 13:44:04.312590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.215 [2024-10-01 13:44:04.312625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.215 [2024-10-01 13:44:04.312658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.215 [2024-10-01 13:44:04.312675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.215 [2024-10-01 13:44:04.312690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.215 [2024-10-01 13:44:04.312722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.215 [2024-10-01 13:44:04.319782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.215 [2024-10-01 13:44:04.319964] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.215 [2024-10-01 13:44:04.320000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.215 [2024-10-01 13:44:04.320019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.215 [2024-10-01 13:44:04.320054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.215 [2024-10-01 13:44:04.320088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.215 [2024-10-01 13:44:04.320105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.215 [2024-10-01 13:44:04.320121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.215 [2024-10-01 13:44:04.320154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.215 [2024-10-01 13:44:04.323157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.215 [2024-10-01 13:44:04.323278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.215 [2024-10-01 13:44:04.323316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.215 [2024-10-01 13:44:04.323336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.215 [2024-10-01 13:44:04.323369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.215 [2024-10-01 13:44:04.323401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.215 [2024-10-01 13:44:04.323419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.215 [2024-10-01 13:44:04.323434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.215 [2024-10-01 13:44:04.323466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.215 [2024-10-01 13:44:04.330913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.215 [2024-10-01 13:44:04.331121] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.215 [2024-10-01 13:44:04.331158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.215 [2024-10-01 13:44:04.331177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.215 [2024-10-01 13:44:04.331213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.215 [2024-10-01 13:44:04.331247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.215 [2024-10-01 13:44:04.331265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.215 [2024-10-01 13:44:04.331281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.215 [2024-10-01 13:44:04.331314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.215 [2024-10-01 13:44:04.334307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.215 [2024-10-01 13:44:04.334425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.215 [2024-10-01 13:44:04.334482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.215 [2024-10-01 13:44:04.334503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.215 [2024-10-01 13:44:04.334553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.215 [2024-10-01 13:44:04.334590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.215 [2024-10-01 13:44:04.334609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.215 [2024-10-01 13:44:04.334623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.215 [2024-10-01 13:44:04.334654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.215 [2024-10-01 13:44:04.341386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.215 [2024-10-01 13:44:04.341505] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.215 [2024-10-01 13:44:04.341552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.215 [2024-10-01 13:44:04.341574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.215 [2024-10-01 13:44:04.341608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.215 [2024-10-01 13:44:04.341641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.215 [2024-10-01 13:44:04.341659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.215 [2024-10-01 13:44:04.341673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.215 [2024-10-01 13:44:04.341706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.215 [2024-10-01 13:44:04.345516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.215 [2024-10-01 13:44:04.345639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.215 [2024-10-01 13:44:04.345693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.215 [2024-10-01 13:44:04.345713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.215 [2024-10-01 13:44:04.345746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.216 [2024-10-01 13:44:04.345796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.216 [2024-10-01 13:44:04.345815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.216 [2024-10-01 13:44:04.345830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.216 [2024-10-01 13:44:04.345860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.216 [2024-10-01 13:44:04.352264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.216 [2024-10-01 13:44:04.352389] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.216 [2024-10-01 13:44:04.352433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.216 [2024-10-01 13:44:04.352453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.216 [2024-10-01 13:44:04.352487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.216 [2024-10-01 13:44:04.352520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.216 [2024-10-01 13:44:04.352554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.216 [2024-10-01 13:44:04.352571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.216 [2024-10-01 13:44:04.352604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.216 [2024-10-01 13:44:04.355920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.216 [2024-10-01 13:44:04.356035] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.216 [2024-10-01 13:44:04.356078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.216 [2024-10-01 13:44:04.356106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.216 [2024-10-01 13:44:04.356140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.216 [2024-10-01 13:44:04.356173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.216 [2024-10-01 13:44:04.356190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.216 [2024-10-01 13:44:04.356205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.216 [2024-10-01 13:44:04.356237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.216 [2024-10-01 13:44:04.363409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.216 [2024-10-01 13:44:04.363526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.216 [2024-10-01 13:44:04.363584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.216 [2024-10-01 13:44:04.363605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.216 [2024-10-01 13:44:04.363639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.216 [2024-10-01 13:44:04.363672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.216 [2024-10-01 13:44:04.363689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.216 [2024-10-01 13:44:04.363704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.216 [2024-10-01 13:44:04.363756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.216 [2024-10-01 13:44:04.366900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.216 [2024-10-01 13:44:04.367017] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.216 [2024-10-01 13:44:04.367064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.216 [2024-10-01 13:44:04.367084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.216 [2024-10-01 13:44:04.367118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.216 [2024-10-01 13:44:04.367150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.216 [2024-10-01 13:44:04.367168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.216 [2024-10-01 13:44:04.367182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.216 [2024-10-01 13:44:04.367214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.216 [2024-10-01 13:44:04.374676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.216 [2024-10-01 13:44:04.374795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.216 [2024-10-01 13:44:04.374840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.216 [2024-10-01 13:44:04.374861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.216 [2024-10-01 13:44:04.374895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.216 [2024-10-01 13:44:04.374927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.216 [2024-10-01 13:44:04.374945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.216 [2024-10-01 13:44:04.374960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.216 [2024-10-01 13:44:04.374992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.216 [2024-10-01 13:44:04.378065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.216 [2024-10-01 13:44:04.378190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.216 [2024-10-01 13:44:04.378232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.216 [2024-10-01 13:44:04.378253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.216 [2024-10-01 13:44:04.378286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.216 [2024-10-01 13:44:04.378319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.216 [2024-10-01 13:44:04.378337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.216 [2024-10-01 13:44:04.378351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.216 [2024-10-01 13:44:04.378382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.216 [2024-10-01 13:44:04.385106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.216 [2024-10-01 13:44:04.385230] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.216 [2024-10-01 13:44:04.385281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.216 [2024-10-01 13:44:04.385320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.216 [2024-10-01 13:44:04.385356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.216 [2024-10-01 13:44:04.385389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.216 [2024-10-01 13:44:04.385407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.216 [2024-10-01 13:44:04.385422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.216 [2024-10-01 13:44:04.385455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.216 [2024-10-01 13:44:04.389314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.216 [2024-10-01 13:44:04.389434] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.216 [2024-10-01 13:44:04.389483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.216 [2024-10-01 13:44:04.389503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.216 [2024-10-01 13:44:04.389550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.216 [2024-10-01 13:44:04.389587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.216 [2024-10-01 13:44:04.389606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.216 [2024-10-01 13:44:04.389620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.216 [2024-10-01 13:44:04.389653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.216 [2024-10-01 13:44:04.396026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.216 [2024-10-01 13:44:04.396145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.216 [2024-10-01 13:44:04.396191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.216 [2024-10-01 13:44:04.396211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.216 [2024-10-01 13:44:04.396245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.216 [2024-10-01 13:44:04.396278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.217 [2024-10-01 13:44:04.396295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.217 [2024-10-01 13:44:04.396309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.217 [2024-10-01 13:44:04.396341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.217 [2024-10-01 13:44:04.399650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.217 [2024-10-01 13:44:04.399767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.217 [2024-10-01 13:44:04.399808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.217 [2024-10-01 13:44:04.399827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.217 [2024-10-01 13:44:04.399861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.217 [2024-10-01 13:44:04.399908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.217 [2024-10-01 13:44:04.399943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.217 [2024-10-01 13:44:04.399959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.217 [2024-10-01 13:44:04.399992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.217 [2024-10-01 13:44:04.406989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.217 [2024-10-01 13:44:04.407107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.217 [2024-10-01 13:44:04.407154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.217 [2024-10-01 13:44:04.407174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.217 [2024-10-01 13:44:04.407208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.217 [2024-10-01 13:44:04.407241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.217 [2024-10-01 13:44:04.407258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.217 [2024-10-01 13:44:04.407273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.217 [2024-10-01 13:44:04.407305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.217 [2024-10-01 13:44:04.410357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.217 [2024-10-01 13:44:04.410474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.217 [2024-10-01 13:44:04.410519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.217 [2024-10-01 13:44:04.410553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.217 [2024-10-01 13:44:04.410590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.217 [2024-10-01 13:44:04.410623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.217 [2024-10-01 13:44:04.410640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.217 [2024-10-01 13:44:04.410655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.217 [2024-10-01 13:44:04.410686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.217 [2024-10-01 13:44:04.417957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.217 [2024-10-01 13:44:04.418077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.217 [2024-10-01 13:44:04.418120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.217 [2024-10-01 13:44:04.418141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.217 [2024-10-01 13:44:04.418174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.217 [2024-10-01 13:44:04.418207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.217 [2024-10-01 13:44:04.418224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.217 [2024-10-01 13:44:04.418238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.217 [2024-10-01 13:44:04.418270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.217 [2024-10-01 13:44:04.421180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.217 [2024-10-01 13:44:04.421297] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.217 [2024-10-01 13:44:04.421345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.217 [2024-10-01 13:44:04.421366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.217 [2024-10-01 13:44:04.421399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.217 [2024-10-01 13:44:04.421431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.217 [2024-10-01 13:44:04.421449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.217 [2024-10-01 13:44:04.421464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.217 [2024-10-01 13:44:04.421496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.217 [2024-10-01 13:44:04.428060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.217 [2024-10-01 13:44:04.428177] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.217 [2024-10-01 13:44:04.428227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.217 [2024-10-01 13:44:04.428247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.217 [2024-10-01 13:44:04.428281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.217 [2024-10-01 13:44:04.428322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.217 [2024-10-01 13:44:04.428339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.217 [2024-10-01 13:44:04.428354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.217 [2024-10-01 13:44:04.428645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.217 [2024-10-01 13:44:04.432075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.217 [2024-10-01 13:44:04.432190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.217 [2024-10-01 13:44:04.432235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.217 [2024-10-01 13:44:04.432255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.217 [2024-10-01 13:44:04.432288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.217 [2024-10-01 13:44:04.432321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.217 [2024-10-01 13:44:04.432338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.217 [2024-10-01 13:44:04.432353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.217 [2024-10-01 13:44:04.432384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.217 [2024-10-01 13:44:04.438599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.217 [2024-10-01 13:44:04.438717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.217 [2024-10-01 13:44:04.438755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.217 [2024-10-01 13:44:04.438773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.217 [2024-10-01 13:44:04.438826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.217 [2024-10-01 13:44:04.438860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.217 [2024-10-01 13:44:04.438877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.217 [2024-10-01 13:44:04.438891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.217 [2024-10-01 13:44:04.438924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.217 [2024-10-01 13:44:04.442171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.217 [2024-10-01 13:44:04.442288] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.217 [2024-10-01 13:44:04.442331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.217 [2024-10-01 13:44:04.442351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.217 [2024-10-01 13:44:04.442385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.217 [2024-10-01 13:44:04.442417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.217 [2024-10-01 13:44:04.442435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.217 [2024-10-01 13:44:04.442450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.217 [2024-10-01 13:44:04.442482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.217 [2024-10-01 13:44:04.449453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.217 [2024-10-01 13:44:04.449583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.217 [2024-10-01 13:44:04.449617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.217 [2024-10-01 13:44:04.449635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.217 [2024-10-01 13:44:04.449670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.217 [2024-10-01 13:44:04.449703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.217 [2024-10-01 13:44:04.449720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.217 [2024-10-01 13:44:04.449734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.217 [2024-10-01 13:44:04.449766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.218 [2024-10-01 13:44:04.452787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.218 [2024-10-01 13:44:04.452902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.218 [2024-10-01 13:44:04.452934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.218 [2024-10-01 13:44:04.452952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.218 [2024-10-01 13:44:04.452985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.218 [2024-10-01 13:44:04.453016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.218 [2024-10-01 13:44:04.453034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.218 [2024-10-01 13:44:04.453064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.218 [2024-10-01 13:44:04.453099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.218 [2024-10-01 13:44:04.460379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.218 [2024-10-01 13:44:04.460497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.218 [2024-10-01 13:44:04.460555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.218 [2024-10-01 13:44:04.460577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.218 [2024-10-01 13:44:04.460612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.218 [2024-10-01 13:44:04.460645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.218 [2024-10-01 13:44:04.460663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.218 [2024-10-01 13:44:04.460677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.218 [2024-10-01 13:44:04.460709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.218 [2024-10-01 13:44:04.463608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.218 [2024-10-01 13:44:04.463721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.218 [2024-10-01 13:44:04.463765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.218 [2024-10-01 13:44:04.463785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.218 [2024-10-01 13:44:04.463818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.218 [2024-10-01 13:44:04.463850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.218 [2024-10-01 13:44:04.463868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.218 [2024-10-01 13:44:04.463893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.218 [2024-10-01 13:44:04.463927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.218 [2024-10-01 13:44:04.471187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.218 [2024-10-01 13:44:04.471305] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.218 [2024-10-01 13:44:04.471337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.218 [2024-10-01 13:44:04.471356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.218 [2024-10-01 13:44:04.471390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.218 [2024-10-01 13:44:04.471439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.218 [2024-10-01 13:44:04.471462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.218 [2024-10-01 13:44:04.471477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.218 [2024-10-01 13:44:04.471510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.218 [2024-10-01 13:44:04.473700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.218 [2024-10-01 13:44:04.473831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.218 [2024-10-01 13:44:04.473864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.218 [2024-10-01 13:44:04.473881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.218 [2024-10-01 13:44:04.473915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.218 [2024-10-01 13:44:04.473948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.218 [2024-10-01 13:44:04.473965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.218 [2024-10-01 13:44:04.473980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.218 [2024-10-01 13:44:04.475299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.218 [2024-10-01 13:44:04.482122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.218 [2024-10-01 13:44:04.482973] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.218 [2024-10-01 13:44:04.483019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.218 [2024-10-01 13:44:04.483040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.218 [2024-10-01 13:44:04.483215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.218 [2024-10-01 13:44:04.483309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.218 [2024-10-01 13:44:04.483337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.218 [2024-10-01 13:44:04.483352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.218 [2024-10-01 13:44:04.483387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.218 [2024-10-01 13:44:04.484783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.218 [2024-10-01 13:44:04.484897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.218 [2024-10-01 13:44:04.484929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.218 [2024-10-01 13:44:04.484947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.218 [2024-10-01 13:44:04.486015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.218 [2024-10-01 13:44:04.486668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.218 [2024-10-01 13:44:04.486707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.218 [2024-10-01 13:44:04.486725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.218 [2024-10-01 13:44:04.486825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.218 [2024-10-01 13:44:04.492465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.218 [2024-10-01 13:44:04.492598] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.218 [2024-10-01 13:44:04.492642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.218 [2024-10-01 13:44:04.492662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.218 [2024-10-01 13:44:04.492696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.218 [2024-10-01 13:44:04.492753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.218 [2024-10-01 13:44:04.492773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.218 [2024-10-01 13:44:04.492787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.218 [2024-10-01 13:44:04.492819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.218 [2024-10-01 13:44:04.494877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.218 [2024-10-01 13:44:04.494988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.218 [2024-10-01 13:44:04.495020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.218 [2024-10-01 13:44:04.495038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.218 [2024-10-01 13:44:04.496271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.218 [2024-10-01 13:44:04.496509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.218 [2024-10-01 13:44:04.496556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.218 [2024-10-01 13:44:04.496576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.218 [2024-10-01 13:44:04.497327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.218 [2024-10-01 13:44:04.502571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.218 [2024-10-01 13:44:04.502688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.218 [2024-10-01 13:44:04.502732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.218 [2024-10-01 13:44:04.502750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.218 [2024-10-01 13:44:04.504080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.218 [2024-10-01 13:44:04.505055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.218 [2024-10-01 13:44:04.505096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.218 [2024-10-01 13:44:04.505114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.218 [2024-10-01 13:44:04.505257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.218 [2024-10-01 13:44:04.505312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.218 [2024-10-01 13:44:04.505405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.219 [2024-10-01 13:44:04.505436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.219 [2024-10-01 13:44:04.505454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.219 [2024-10-01 13:44:04.505487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.219 [2024-10-01 13:44:04.505519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.219 [2024-10-01 13:44:04.505551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.219 [2024-10-01 13:44:04.505569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.219 [2024-10-01 13:44:04.505621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.219 [2024-10-01 13:44:04.513418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.219 [2024-10-01 13:44:04.513549] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.219 [2024-10-01 13:44:04.513582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.219 [2024-10-01 13:44:04.513599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.219 [2024-10-01 13:44:04.514665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.219 [2024-10-01 13:44:04.515302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.219 [2024-10-01 13:44:04.515341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.219 [2024-10-01 13:44:04.515359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.219 [2024-10-01 13:44:04.515455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.219 [2024-10-01 13:44:04.515518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.219 [2024-10-01 13:44:04.515630] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.219 [2024-10-01 13:44:04.515661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.219 [2024-10-01 13:44:04.515678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.219 [2024-10-01 13:44:04.515711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.219 [2024-10-01 13:44:04.515984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.219 [2024-10-01 13:44:04.516023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.219 [2024-10-01 13:44:04.516041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.219 [2024-10-01 13:44:04.516195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.219 [2024-10-01 13:44:04.523511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.219 [2024-10-01 13:44:04.523641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.219 [2024-10-01 13:44:04.523673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.219 [2024-10-01 13:44:04.523692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.219 [2024-10-01 13:44:04.523725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.219 [2024-10-01 13:44:04.524940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.219 [2024-10-01 13:44:04.524980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.219 [2024-10-01 13:44:04.524998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.219 [2024-10-01 13:44:04.525230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.219 [2024-10-01 13:44:04.526201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.219 [2024-10-01 13:44:04.526314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.219 [2024-10-01 13:44:04.526355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.219 [2024-10-01 13:44:04.526390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.219 [2024-10-01 13:44:04.526426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.219 [2024-10-01 13:44:04.527514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.219 [2024-10-01 13:44:04.527568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.219 [2024-10-01 13:44:04.527586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.219 [2024-10-01 13:44:04.527806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.219 [2024-10-01 13:44:04.533803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.219 [2024-10-01 13:44:04.533920] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.219 [2024-10-01 13:44:04.533952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.219 [2024-10-01 13:44:04.533971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.219 [2024-10-01 13:44:04.534004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.219 [2024-10-01 13:44:04.534036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.219 [2024-10-01 13:44:04.534053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.219 [2024-10-01 13:44:04.534067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.219 [2024-10-01 13:44:04.534099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.219 [2024-10-01 13:44:04.536987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.219 [2024-10-01 13:44:04.537103] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.219 [2024-10-01 13:44:04.537147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.219 [2024-10-01 13:44:04.537167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.219 [2024-10-01 13:44:04.537201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.219 [2024-10-01 13:44:04.537233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.219 [2024-10-01 13:44:04.537251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.219 [2024-10-01 13:44:04.537266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.219 [2024-10-01 13:44:04.537297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.219 [2024-10-01 13:44:04.543913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.219 [2024-10-01 13:44:04.544030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.219 [2024-10-01 13:44:04.544062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.219 [2024-10-01 13:44:04.544081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.219 [2024-10-01 13:44:04.544113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.219 [2024-10-01 13:44:04.544146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.219 [2024-10-01 13:44:04.544181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.219 [2024-10-01 13:44:04.544197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.219 [2024-10-01 13:44:04.544231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.219 [2024-10-01 13:44:04.548059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.219 [2024-10-01 13:44:04.548175] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.219 [2024-10-01 13:44:04.548208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.219 [2024-10-01 13:44:04.548226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.219 [2024-10-01 13:44:04.548259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.219 [2024-10-01 13:44:04.548292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.219 [2024-10-01 13:44:04.548309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.219 [2024-10-01 13:44:04.548324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.219 [2024-10-01 13:44:04.548356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.219 [2024-10-01 13:44:04.554555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.219 [2024-10-01 13:44:04.554691] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.219 [2024-10-01 13:44:04.554743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.219 [2024-10-01 13:44:04.554764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.219 [2024-10-01 13:44:04.554798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.219 [2024-10-01 13:44:04.554831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.219 [2024-10-01 13:44:04.554849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.219 [2024-10-01 13:44:04.554863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.219 [2024-10-01 13:44:04.554896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.219 [2024-10-01 13:44:04.558149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.219 [2024-10-01 13:44:04.558264] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.219 [2024-10-01 13:44:04.558310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.219 [2024-10-01 13:44:04.558331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.219 [2024-10-01 13:44:04.558364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.220 [2024-10-01 13:44:04.558397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.220 [2024-10-01 13:44:04.558415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.220 [2024-10-01 13:44:04.558429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.220 [2024-10-01 13:44:04.558705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.220 [2024-10-01 13:44:04.565355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.220 [2024-10-01 13:44:04.565472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.220 [2024-10-01 13:44:04.565505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.220 [2024-10-01 13:44:04.565522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.220 [2024-10-01 13:44:04.565571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.220 [2024-10-01 13:44:04.565606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.220 [2024-10-01 13:44:04.565624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.220 [2024-10-01 13:44:04.565638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.220 [2024-10-01 13:44:04.565670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.220 [2024-10-01 13:44:04.568639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.220 [2024-10-01 13:44:04.568759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.220 [2024-10-01 13:44:04.568806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.220 [2024-10-01 13:44:04.568826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.220 [2024-10-01 13:44:04.568860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.220 [2024-10-01 13:44:04.568893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.220 [2024-10-01 13:44:04.568911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.220 [2024-10-01 13:44:04.568925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.220 [2024-10-01 13:44:04.568956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.220 [2024-10-01 13:44:04.576197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.220 [2024-10-01 13:44:04.576315] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.220 [2024-10-01 13:44:04.576363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.220 [2024-10-01 13:44:04.576383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.220 [2024-10-01 13:44:04.576417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.220 [2024-10-01 13:44:04.576450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.220 [2024-10-01 13:44:04.576467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.220 [2024-10-01 13:44:04.576482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.220 [2024-10-01 13:44:04.576514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.220 [2024-10-01 13:44:04.579400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.220 [2024-10-01 13:44:04.579514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.220 [2024-10-01 13:44:04.579569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.220 [2024-10-01 13:44:04.579591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.220 [2024-10-01 13:44:04.579645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.220 [2024-10-01 13:44:04.579679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.220 [2024-10-01 13:44:04.579696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.220 [2024-10-01 13:44:04.579710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.220 [2024-10-01 13:44:04.579742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.220 [2024-10-01 13:44:04.586291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.220 [2024-10-01 13:44:04.586408] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.220 [2024-10-01 13:44:04.586451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.220 [2024-10-01 13:44:04.586472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.220 [2024-10-01 13:44:04.586506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.220 [2024-10-01 13:44:04.586552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.220 [2024-10-01 13:44:04.586573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.220 [2024-10-01 13:44:04.586588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.220 [2024-10-01 13:44:04.586854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.220 [2024-10-01 13:44:04.590334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.220 [2024-10-01 13:44:04.590450] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.220 [2024-10-01 13:44:04.590482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.220 [2024-10-01 13:44:04.590500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.220 [2024-10-01 13:44:04.590546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.220 [2024-10-01 13:44:04.590583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.220 [2024-10-01 13:44:04.590601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.220 [2024-10-01 13:44:04.590615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.220 [2024-10-01 13:44:04.590647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.220 [2024-10-01 13:44:04.596862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.220 [2024-10-01 13:44:04.596981] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.220 [2024-10-01 13:44:04.597020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.220 [2024-10-01 13:44:04.597040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.220 [2024-10-01 13:44:04.597073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.220 [2024-10-01 13:44:04.597106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.220 [2024-10-01 13:44:04.597123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.220 [2024-10-01 13:44:04.597156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.220 [2024-10-01 13:44:04.597190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.220 [2024-10-01 13:44:04.600427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.220 [2024-10-01 13:44:04.600558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.220 [2024-10-01 13:44:04.600604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.220 [2024-10-01 13:44:04.600624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.220 [2024-10-01 13:44:04.600659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.220 [2024-10-01 13:44:04.600692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.220 [2024-10-01 13:44:04.600710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.220 [2024-10-01 13:44:04.600725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.220 [2024-10-01 13:44:04.600985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.220 [2024-10-01 13:44:04.607648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.220 [2024-10-01 13:44:04.607765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.220 [2024-10-01 13:44:04.607811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.220 [2024-10-01 13:44:04.607831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.220 [2024-10-01 13:44:04.607865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.220 [2024-10-01 13:44:04.607911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.220 [2024-10-01 13:44:04.607936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.221 [2024-10-01 13:44:04.607950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.221 [2024-10-01 13:44:04.607982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.221 [2024-10-01 13:44:04.610923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.221 [2024-10-01 13:44:04.611038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.221 [2024-10-01 13:44:04.611082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.221 [2024-10-01 13:44:04.611102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.221 [2024-10-01 13:44:04.611136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.221 [2024-10-01 13:44:04.611168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.221 [2024-10-01 13:44:04.611186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.221 [2024-10-01 13:44:04.611200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.221 [2024-10-01 13:44:04.611232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.221 [2024-10-01 13:44:04.618509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.221 [2024-10-01 13:44:04.618661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.221 [2024-10-01 13:44:04.618712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.221 [2024-10-01 13:44:04.618733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.221 [2024-10-01 13:44:04.618767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.221 [2024-10-01 13:44:04.618800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.221 [2024-10-01 13:44:04.618817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.221 [2024-10-01 13:44:04.618832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.221 [2024-10-01 13:44:04.618865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.221 [2024-10-01 13:44:04.621754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.221 [2024-10-01 13:44:04.621868] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.221 [2024-10-01 13:44:04.621912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.221 [2024-10-01 13:44:04.621933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.221 [2024-10-01 13:44:04.621966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.221 [2024-10-01 13:44:04.621998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.221 [2024-10-01 13:44:04.622016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.221 [2024-10-01 13:44:04.622030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.221 [2024-10-01 13:44:04.622061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.221 [2024-10-01 13:44:04.628640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.221 [2024-10-01 13:44:04.628761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.221 [2024-10-01 13:44:04.628800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.221 [2024-10-01 13:44:04.628818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.221 [2024-10-01 13:44:04.628851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.221 [2024-10-01 13:44:04.628884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.221 [2024-10-01 13:44:04.628901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.221 [2024-10-01 13:44:04.628915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.221 [2024-10-01 13:44:04.629179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.221 [2024-10-01 13:44:04.632666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.221 [2024-10-01 13:44:04.632782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.221 [2024-10-01 13:44:04.632820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.221 [2024-10-01 13:44:04.632840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.221 [2024-10-01 13:44:04.632873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.221 [2024-10-01 13:44:04.632923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.221 [2024-10-01 13:44:04.632943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.221 [2024-10-01 13:44:04.632958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.221 [2024-10-01 13:44:04.632990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.221 [2024-10-01 13:44:04.639120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.221 [2024-10-01 13:44:04.639239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.221 [2024-10-01 13:44:04.639281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.221 [2024-10-01 13:44:04.639301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.221 [2024-10-01 13:44:04.639335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.221 [2024-10-01 13:44:04.639367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.221 [2024-10-01 13:44:04.639385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.221 [2024-10-01 13:44:04.639399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.221 [2024-10-01 13:44:04.639431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.221 [2024-10-01 13:44:04.642759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.221 [2024-10-01 13:44:04.642872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.221 [2024-10-01 13:44:04.642916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.221 [2024-10-01 13:44:04.642937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.221 [2024-10-01 13:44:04.642971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.221 [2024-10-01 13:44:04.643002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.221 [2024-10-01 13:44:04.643020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.221 [2024-10-01 13:44:04.643034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.221 [2024-10-01 13:44:04.643294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.221 [2024-10-01 13:44:04.649924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.221 [2024-10-01 13:44:04.650041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.221 [2024-10-01 13:44:04.650084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.221 [2024-10-01 13:44:04.650104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.221 [2024-10-01 13:44:04.650138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.221 [2024-10-01 13:44:04.650170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.221 [2024-10-01 13:44:04.650187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.221 [2024-10-01 13:44:04.650202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.221 [2024-10-01 13:44:04.650253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.221 [2024-10-01 13:44:04.653208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.221 [2024-10-01 13:44:04.653325] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.221 [2024-10-01 13:44:04.653370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.221 [2024-10-01 13:44:04.653390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.221 [2024-10-01 13:44:04.653423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.221 [2024-10-01 13:44:04.653454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.221 [2024-10-01 13:44:04.653471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.221 [2024-10-01 13:44:04.653486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.221 [2024-10-01 13:44:04.653517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.221 [2024-10-01 13:44:04.660811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.221 [2024-10-01 13:44:04.660929] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.221 [2024-10-01 13:44:04.660976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.221 [2024-10-01 13:44:04.660996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.221 [2024-10-01 13:44:04.661029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.221 [2024-10-01 13:44:04.661061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.221 [2024-10-01 13:44:04.661078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.221 [2024-10-01 13:44:04.661093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.222 [2024-10-01 13:44:04.661124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.222 [2024-10-01 13:44:04.663298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.222 [2024-10-01 13:44:04.663409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.222 [2024-10-01 13:44:04.663454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.222 [2024-10-01 13:44:04.663474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.222 [2024-10-01 13:44:04.663507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.222 [2024-10-01 13:44:04.663556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.222 [2024-10-01 13:44:04.663577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.222 [2024-10-01 13:44:04.663592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.222 [2024-10-01 13:44:04.663624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.222 8611.58 IOPS, 33.64 MiB/s [2024-10-01 13:44:04.672368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.222 [2024-10-01 13:44:04.672488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.222 [2024-10-01 13:44:04.672557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.222 [2024-10-01 13:44:04.672581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.222 [2024-10-01 13:44:04.672616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.222 [2024-10-01 13:44:04.672649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.222 [2024-10-01 13:44:04.672667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.222 [2024-10-01 13:44:04.672681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.222 [2024-10-01 13:44:04.672714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.222 [2024-10-01 13:44:04.673385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.222 [2024-10-01 13:44:04.673497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.222 [2024-10-01 13:44:04.673552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.222 [2024-10-01 13:44:04.673574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.222 [2024-10-01 13:44:04.673608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.222 [2024-10-01 13:44:04.673640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.222 [2024-10-01 13:44:04.673658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.222 [2024-10-01 13:44:04.673672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.222 [2024-10-01 13:44:04.673703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.222 [2024-10-01 13:44:04.683333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.222 [2024-10-01 13:44:04.683629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.222 [2024-10-01 13:44:04.683674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.222 [2024-10-01 13:44:04.683695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.222 [2024-10-01 13:44:04.683790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.222 [2024-10-01 13:44:04.683836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.222 [2024-10-01 13:44:04.683868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.222 [2024-10-01 13:44:04.683899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.222 [2024-10-01 13:44:04.683915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.222 [2024-10-01 13:44:04.683947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.222 [2024-10-01 13:44:04.684010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.222 [2024-10-01 13:44:04.684038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.222 [2024-10-01 13:44:04.684056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.222 [2024-10-01 13:44:04.685148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.222 [2024-10-01 13:44:04.685389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.222 [2024-10-01 13:44:04.685425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.222 [2024-10-01 13:44:04.685443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.222 [2024-10-01 13:44:04.686509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.222 [2024-10-01 13:44:04.693430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.222 [2024-10-01 13:44:04.693563] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.222 [2024-10-01 13:44:04.693606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.222 [2024-10-01 13:44:04.693626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.222 [2024-10-01 13:44:04.694554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.222 [2024-10-01 13:44:04.694789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.222 [2024-10-01 13:44:04.694826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.222 [2024-10-01 13:44:04.694844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.222 [2024-10-01 13:44:04.694890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.222 [2024-10-01 13:44:04.694916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.222 [2024-10-01 13:44:04.694997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.222 [2024-10-01 13:44:04.695029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.222 [2024-10-01 13:44:04.695047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.222 [2024-10-01 13:44:04.695080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.222 [2024-10-01 13:44:04.695112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.222 [2024-10-01 13:44:04.695130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.222 [2024-10-01 13:44:04.695144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.222 [2024-10-01 13:44:04.695175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.222 [2024-10-01 13:44:04.705756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.222 [2024-10-01 13:44:04.705834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.222 [2024-10-01 13:44:04.705917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.222 [2024-10-01 13:44:04.705947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.222 [2024-10-01 13:44:04.705965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.222 [2024-10-01 13:44:04.706033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.222 [2024-10-01 13:44:04.706061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.222 [2024-10-01 13:44:04.706078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.222 [2024-10-01 13:44:04.706096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.222 [2024-10-01 13:44:04.706151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.222 [2024-10-01 13:44:04.706173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.222 [2024-10-01 13:44:04.706187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.222 [2024-10-01 13:44:04.706201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.222 [2024-10-01 13:44:04.706233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.222 [2024-10-01 13:44:04.706252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.222 [2024-10-01 13:44:04.706267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.222 [2024-10-01 13:44:04.706281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.222 [2024-10-01 13:44:04.706310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.222 [2024-10-01 13:44:04.715873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.222 [2024-10-01 13:44:04.716014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.222 [2024-10-01 13:44:04.716057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.222 [2024-10-01 13:44:04.716079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.222 [2024-10-01 13:44:04.716115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.222 [2024-10-01 13:44:04.716150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.222 [2024-10-01 13:44:04.716223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.222 [2024-10-01 13:44:04.716252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.222 [2024-10-01 13:44:04.716269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.222 [2024-10-01 13:44:04.716284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.223 [2024-10-01 13:44:04.716297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.223 [2024-10-01 13:44:04.716311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.223 [2024-10-01 13:44:04.716590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.223 [2024-10-01 13:44:04.716624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.223 [2024-10-01 13:44:04.716768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.223 [2024-10-01 13:44:04.716805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.223 [2024-10-01 13:44:04.716822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.223 [2024-10-01 13:44:04.716933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.223 [2024-10-01 13:44:04.726590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.223 [2024-10-01 13:44:04.726638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.223 [2024-10-01 13:44:04.726737] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.223 [2024-10-01 13:44:04.726792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.223 [2024-10-01 13:44:04.726814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.223 [2024-10-01 13:44:04.726867] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.223 [2024-10-01 13:44:04.726893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.223 [2024-10-01 13:44:04.726909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.223 [2024-10-01 13:44:04.726943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.223 [2024-10-01 13:44:04.726967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.223 [2024-10-01 13:44:04.728064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.223 [2024-10-01 13:44:04.728105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.223 [2024-10-01 13:44:04.728124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.223 [2024-10-01 13:44:04.728142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.223 [2024-10-01 13:44:04.728157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.223 [2024-10-01 13:44:04.728170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.223 [2024-10-01 13:44:04.728400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.223 [2024-10-01 13:44:04.728428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.223 [2024-10-01 13:44:04.737382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.223 [2024-10-01 13:44:04.737432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.223 [2024-10-01 13:44:04.737530] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.223 [2024-10-01 13:44:04.737576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.223 [2024-10-01 13:44:04.737595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.223 [2024-10-01 13:44:04.737647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.223 [2024-10-01 13:44:04.737672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.223 [2024-10-01 13:44:04.737689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.223 [2024-10-01 13:44:04.737723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.223 [2024-10-01 13:44:04.737746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.223 [2024-10-01 13:44:04.737772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.223 [2024-10-01 13:44:04.737789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.223 [2024-10-01 13:44:04.737803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.223 [2024-10-01 13:44:04.737820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.223 [2024-10-01 13:44:04.737835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.223 [2024-10-01 13:44:04.737864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.223 [2024-10-01 13:44:04.737898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.223 [2024-10-01 13:44:04.737918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.223 [2024-10-01 13:44:04.748259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.223 [2024-10-01 13:44:04.748310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.223 [2024-10-01 13:44:04.748409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.223 [2024-10-01 13:44:04.748448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.223 [2024-10-01 13:44:04.748467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.223 [2024-10-01 13:44:04.748518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.223 [2024-10-01 13:44:04.748558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.223 [2024-10-01 13:44:04.748578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.223 [2024-10-01 13:44:04.748612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.223 [2024-10-01 13:44:04.748635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.223 [2024-10-01 13:44:04.748661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.223 [2024-10-01 13:44:04.748679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.223 [2024-10-01 13:44:04.748693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.223 [2024-10-01 13:44:04.748711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.223 [2024-10-01 13:44:04.748726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.223 [2024-10-01 13:44:04.748739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.223 [2024-10-01 13:44:04.748771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.223 [2024-10-01 13:44:04.748790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.223 [2024-10-01 13:44:04.758393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.223 [2024-10-01 13:44:04.758469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.223 [2024-10-01 13:44:04.758567] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.223 [2024-10-01 13:44:04.758604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.223 [2024-10-01 13:44:04.758623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.223 [2024-10-01 13:44:04.758693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.223 [2024-10-01 13:44:04.758721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.223 [2024-10-01 13:44:04.758738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.223 [2024-10-01 13:44:04.758757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.223 [2024-10-01 13:44:04.759021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.223 [2024-10-01 13:44:04.759084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.223 [2024-10-01 13:44:04.759103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.223 [2024-10-01 13:44:04.759117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.223 [2024-10-01 13:44:04.759254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.223 [2024-10-01 13:44:04.759279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.223 [2024-10-01 13:44:04.759294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.223 [2024-10-01 13:44:04.759309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.223 [2024-10-01 13:44:04.759418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.223 [2024-10-01 13:44:04.768966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.223 [2024-10-01 13:44:04.769016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.223 [2024-10-01 13:44:04.769114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.223 [2024-10-01 13:44:04.769146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.223 [2024-10-01 13:44:04.769165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.223 [2024-10-01 13:44:04.769214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.223 [2024-10-01 13:44:04.769239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.223 [2024-10-01 13:44:04.769256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.223 [2024-10-01 13:44:04.769289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.223 [2024-10-01 13:44:04.769312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.223 [2024-10-01 13:44:04.770396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.223 [2024-10-01 13:44:04.770436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.223 [2024-10-01 13:44:04.770455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.224 [2024-10-01 13:44:04.770472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.224 [2024-10-01 13:44:04.770488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.224 [2024-10-01 13:44:04.770501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.224 [2024-10-01 13:44:04.770742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.224 [2024-10-01 13:44:04.770770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.224 [2024-10-01 13:44:04.779758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.224 [2024-10-01 13:44:04.779807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.224 [2024-10-01 13:44:04.779915] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.224 [2024-10-01 13:44:04.779948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.224 [2024-10-01 13:44:04.779983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.224 [2024-10-01 13:44:04.780038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.224 [2024-10-01 13:44:04.780063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.224 [2024-10-01 13:44:04.780080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.224 [2024-10-01 13:44:04.780116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.224 [2024-10-01 13:44:04.780150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.224 [2024-10-01 13:44:04.780177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.224 [2024-10-01 13:44:04.780195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.224 [2024-10-01 13:44:04.780210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.224 [2024-10-01 13:44:04.780226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.224 [2024-10-01 13:44:04.780241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.224 [2024-10-01 13:44:04.780255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.224 [2024-10-01 13:44:04.780286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.224 [2024-10-01 13:44:04.780306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.224 [2024-10-01 13:44:04.790646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.224 [2024-10-01 13:44:04.790697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.224 [2024-10-01 13:44:04.790795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.224 [2024-10-01 13:44:04.790832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.224 [2024-10-01 13:44:04.790852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.224 [2024-10-01 13:44:04.790902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.224 [2024-10-01 13:44:04.790927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.224 [2024-10-01 13:44:04.790944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.224 [2024-10-01 13:44:04.790977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.224 [2024-10-01 13:44:04.791000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.224 [2024-10-01 13:44:04.791027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.224 [2024-10-01 13:44:04.791044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.224 [2024-10-01 13:44:04.791059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.224 [2024-10-01 13:44:04.791076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.224 [2024-10-01 13:44:04.791091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.224 [2024-10-01 13:44:04.791104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.224 [2024-10-01 13:44:04.791155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.224 [2024-10-01 13:44:04.791177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.224 [2024-10-01 13:44:04.800782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.224 [2024-10-01 13:44:04.800833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.224 [2024-10-01 13:44:04.800931] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.224 [2024-10-01 13:44:04.800963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.224 [2024-10-01 13:44:04.800982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.224 [2024-10-01 13:44:04.801031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.224 [2024-10-01 13:44:04.801056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.224 [2024-10-01 13:44:04.801072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.224 [2024-10-01 13:44:04.801334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.224 [2024-10-01 13:44:04.801378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.224 [2024-10-01 13:44:04.801523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.224 [2024-10-01 13:44:04.801571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.224 [2024-10-01 13:44:04.801589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.224 [2024-10-01 13:44:04.801607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.224 [2024-10-01 13:44:04.801622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.224 [2024-10-01 13:44:04.801636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.224 [2024-10-01 13:44:04.801748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.224 [2024-10-01 13:44:04.801770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.224 [2024-10-01 13:44:04.811237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.224 [2024-10-01 13:44:04.811287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.224 [2024-10-01 13:44:04.811385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.224 [2024-10-01 13:44:04.811423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.224 [2024-10-01 13:44:04.811442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.224 [2024-10-01 13:44:04.811493] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.224 [2024-10-01 13:44:04.811517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.224 [2024-10-01 13:44:04.811550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.224 [2024-10-01 13:44:04.811589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.224 [2024-10-01 13:44:04.811613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.224 [2024-10-01 13:44:04.812705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.224 [2024-10-01 13:44:04.812760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.224 [2024-10-01 13:44:04.812779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.224 [2024-10-01 13:44:04.812797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.224 [2024-10-01 13:44:04.812813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.224 [2024-10-01 13:44:04.812826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.224 [2024-10-01 13:44:04.813046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.224 [2024-10-01 13:44:04.813074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.224 [2024-10-01 13:44:04.822002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.224 [2024-10-01 13:44:04.822052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.224 [2024-10-01 13:44:04.822149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.224 [2024-10-01 13:44:04.822181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.224 [2024-10-01 13:44:04.822199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.224 [2024-10-01 13:44:04.822249] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.224 [2024-10-01 13:44:04.822274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.224 [2024-10-01 13:44:04.822290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.224 [2024-10-01 13:44:04.822324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.224 [2024-10-01 13:44:04.822348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.224 [2024-10-01 13:44:04.822375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.224 [2024-10-01 13:44:04.822392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.224 [2024-10-01 13:44:04.822407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.224 [2024-10-01 13:44:04.822424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.224 [2024-10-01 13:44:04.822440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.224 [2024-10-01 13:44:04.822453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.225 [2024-10-01 13:44:04.822485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.225 [2024-10-01 13:44:04.822504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.225 [2024-10-01 13:44:04.832956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.225 [2024-10-01 13:44:04.833006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.225 [2024-10-01 13:44:04.833104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.225 [2024-10-01 13:44:04.833136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.225 [2024-10-01 13:44:04.833154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.225 [2024-10-01 13:44:04.833224] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.225 [2024-10-01 13:44:04.833251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.225 [2024-10-01 13:44:04.833268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.225 [2024-10-01 13:44:04.833302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.225 [2024-10-01 13:44:04.833326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.225 [2024-10-01 13:44:04.833352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.225 [2024-10-01 13:44:04.833370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.225 [2024-10-01 13:44:04.833384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.225 [2024-10-01 13:44:04.833401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.225 [2024-10-01 13:44:04.833416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.225 [2024-10-01 13:44:04.833429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.225 [2024-10-01 13:44:04.833461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.225 [2024-10-01 13:44:04.833480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.225 [2024-10-01 13:44:04.843089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.225 [2024-10-01 13:44:04.843164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.225 [2024-10-01 13:44:04.843248] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.225 [2024-10-01 13:44:04.843278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.225 [2024-10-01 13:44:04.843296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.225 [2024-10-01 13:44:04.843363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.225 [2024-10-01 13:44:04.843391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.225 [2024-10-01 13:44:04.843408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.225 [2024-10-01 13:44:04.843426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.225 [2024-10-01 13:44:04.843707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.225 [2024-10-01 13:44:04.843748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.225 [2024-10-01 13:44:04.843766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.225 [2024-10-01 13:44:04.843781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.225 [2024-10-01 13:44:04.843938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.225 [2024-10-01 13:44:04.843966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.225 [2024-10-01 13:44:04.843981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.225 [2024-10-01 13:44:04.843994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.225 [2024-10-01 13:44:04.844104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.225 [2024-10-01 13:44:04.853553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.225 [2024-10-01 13:44:04.853603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.225 [2024-10-01 13:44:04.853703] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.225 [2024-10-01 13:44:04.853741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.225 [2024-10-01 13:44:04.853761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.225 [2024-10-01 13:44:04.853813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.225 [2024-10-01 13:44:04.853838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.225 [2024-10-01 13:44:04.853855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.225 [2024-10-01 13:44:04.853889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.225 [2024-10-01 13:44:04.853913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.225 [2024-10-01 13:44:04.854995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.225 [2024-10-01 13:44:04.855031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.225 [2024-10-01 13:44:04.855048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.225 [2024-10-01 13:44:04.855065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.225 [2024-10-01 13:44:04.855080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.225 [2024-10-01 13:44:04.855094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.225 [2024-10-01 13:44:04.855312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.225 [2024-10-01 13:44:04.855339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.225 [2024-10-01 13:44:04.864359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.225 [2024-10-01 13:44:04.864408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.225 [2024-10-01 13:44:04.864507] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.225 [2024-10-01 13:44:04.864554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.225 [2024-10-01 13:44:04.864575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.225 [2024-10-01 13:44:04.864627] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.225 [2024-10-01 13:44:04.864652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.225 [2024-10-01 13:44:04.864669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.225 [2024-10-01 13:44:04.864702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.225 [2024-10-01 13:44:04.864725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.225 [2024-10-01 13:44:04.864752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.225 [2024-10-01 13:44:04.864771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.225 [2024-10-01 13:44:04.864803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.225 [2024-10-01 13:44:04.864821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.225 [2024-10-01 13:44:04.864837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.225 [2024-10-01 13:44:04.864850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.225 [2024-10-01 13:44:04.864884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.225 [2024-10-01 13:44:04.864904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.225 [2024-10-01 13:44:04.875259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.225 [2024-10-01 13:44:04.875313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.225 [2024-10-01 13:44:04.875413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.225 [2024-10-01 13:44:04.875451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.225 [2024-10-01 13:44:04.875472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.225 [2024-10-01 13:44:04.875527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.225 [2024-10-01 13:44:04.875569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.226 [2024-10-01 13:44:04.875587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.226 [2024-10-01 13:44:04.875622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.226 [2024-10-01 13:44:04.875646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.226 [2024-10-01 13:44:04.875673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.226 [2024-10-01 13:44:04.875690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.226 [2024-10-01 13:44:04.875704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.226 [2024-10-01 13:44:04.875722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.226 [2024-10-01 13:44:04.875737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.226 [2024-10-01 13:44:04.875751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.226 [2024-10-01 13:44:04.875782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.226 [2024-10-01 13:44:04.875802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.226 [2024-10-01 13:44:04.885393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.226 [2024-10-01 13:44:04.885469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.226 [2024-10-01 13:44:04.885566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.226 [2024-10-01 13:44:04.885611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.226 [2024-10-01 13:44:04.885632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.226 [2024-10-01 13:44:04.885702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.226 [2024-10-01 13:44:04.885730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.226 [2024-10-01 13:44:04.885765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.226 [2024-10-01 13:44:04.885786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.226 [2024-10-01 13:44:04.886051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.226 [2024-10-01 13:44:04.886091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.226 [2024-10-01 13:44:04.886109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.226 [2024-10-01 13:44:04.886124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.226 [2024-10-01 13:44:04.886270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.226 [2024-10-01 13:44:04.886302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.226 [2024-10-01 13:44:04.886319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.226 [2024-10-01 13:44:04.886333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.226 [2024-10-01 13:44:04.886444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.226 [2024-10-01 13:44:04.895955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.226 [2024-10-01 13:44:04.896005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.226 [2024-10-01 13:44:04.896104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.226 [2024-10-01 13:44:04.896137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.226 [2024-10-01 13:44:04.896155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.226 [2024-10-01 13:44:04.896204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.226 [2024-10-01 13:44:04.896229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.226 [2024-10-01 13:44:04.896245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.226 [2024-10-01 13:44:04.896279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.226 [2024-10-01 13:44:04.896302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.226 [2024-10-01 13:44:04.897386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.226 [2024-10-01 13:44:04.897427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.226 [2024-10-01 13:44:04.897445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.226 [2024-10-01 13:44:04.897463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.226 [2024-10-01 13:44:04.897478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.226 [2024-10-01 13:44:04.897491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.226 [2024-10-01 13:44:04.897733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.226 [2024-10-01 13:44:04.897761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.226 [2024-10-01 13:44:04.906732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.226 [2024-10-01 13:44:04.906797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.226 [2024-10-01 13:44:04.906897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.226 [2024-10-01 13:44:04.906931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.226 [2024-10-01 13:44:04.906949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.226 [2024-10-01 13:44:04.906999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.226 [2024-10-01 13:44:04.907023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.226 [2024-10-01 13:44:04.907040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.226 [2024-10-01 13:44:04.907073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.226 [2024-10-01 13:44:04.907097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.226 [2024-10-01 13:44:04.907123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.226 [2024-10-01 13:44:04.907141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.226 [2024-10-01 13:44:04.907155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.226 [2024-10-01 13:44:04.907173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.226 [2024-10-01 13:44:04.907187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.226 [2024-10-01 13:44:04.907201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.226 [2024-10-01 13:44:04.907232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.226 [2024-10-01 13:44:04.907251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.226 [2024-10-01 13:44:04.917648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.226 [2024-10-01 13:44:04.917699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.226 [2024-10-01 13:44:04.917797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.226 [2024-10-01 13:44:04.917829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.226 [2024-10-01 13:44:04.917847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.226 [2024-10-01 13:44:04.917901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.226 [2024-10-01 13:44:04.917927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.226 [2024-10-01 13:44:04.917943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.226 [2024-10-01 13:44:04.917976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.226 [2024-10-01 13:44:04.917999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.226 [2024-10-01 13:44:04.918026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.226 [2024-10-01 13:44:04.918043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.226 [2024-10-01 13:44:04.918057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.226 [2024-10-01 13:44:04.918092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.226 [2024-10-01 13:44:04.918110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.226 [2024-10-01 13:44:04.918124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.226 [2024-10-01 13:44:04.918156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.226 [2024-10-01 13:44:04.918176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.226 [2024-10-01 13:44:04.927778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.226 [2024-10-01 13:44:04.927855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.226 [2024-10-01 13:44:04.927952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.226 [2024-10-01 13:44:04.927984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.226 [2024-10-01 13:44:04.928002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.226 [2024-10-01 13:44:04.928076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.226 [2024-10-01 13:44:04.928103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.226 [2024-10-01 13:44:04.928120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.226 [2024-10-01 13:44:04.928139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.226 [2024-10-01 13:44:04.928403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.227 [2024-10-01 13:44:04.928443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.227 [2024-10-01 13:44:04.928461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.227 [2024-10-01 13:44:04.928476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.227 [2024-10-01 13:44:04.928636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.227 [2024-10-01 13:44:04.928662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.227 [2024-10-01 13:44:04.928677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.227 [2024-10-01 13:44:04.928692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.227 [2024-10-01 13:44:04.928801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.227 [2024-10-01 13:44:04.938287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.227 [2024-10-01 13:44:04.938336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.227 [2024-10-01 13:44:04.938436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.227 [2024-10-01 13:44:04.938468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.227 [2024-10-01 13:44:04.938486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.227 [2024-10-01 13:44:04.938550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.227 [2024-10-01 13:44:04.938578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.227 [2024-10-01 13:44:04.938594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.227 [2024-10-01 13:44:04.938649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.227 [2024-10-01 13:44:04.938673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.227 [2024-10-01 13:44:04.939763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.227 [2024-10-01 13:44:04.939803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.227 [2024-10-01 13:44:04.939829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.227 [2024-10-01 13:44:04.939848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.227 [2024-10-01 13:44:04.939863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.227 [2024-10-01 13:44:04.939886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.227 [2024-10-01 13:44:04.940130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.227 [2024-10-01 13:44:04.940160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.227 [2024-10-01 13:44:04.949096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.227 [2024-10-01 13:44:04.949146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.227 [2024-10-01 13:44:04.949245] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.227 [2024-10-01 13:44:04.949282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.227 [2024-10-01 13:44:04.949302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.227 [2024-10-01 13:44:04.949352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.227 [2024-10-01 13:44:04.949377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.227 [2024-10-01 13:44:04.949393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.227 [2024-10-01 13:44:04.949427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.227 [2024-10-01 13:44:04.949450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.227 [2024-10-01 13:44:04.949476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.227 [2024-10-01 13:44:04.949500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.227 [2024-10-01 13:44:04.949515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.227 [2024-10-01 13:44:04.949532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.227 [2024-10-01 13:44:04.949564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.227 [2024-10-01 13:44:04.949579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.227 [2024-10-01 13:44:04.949612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.227 [2024-10-01 13:44:04.949631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.227 [2024-10-01 13:44:04.960045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.227 [2024-10-01 13:44:04.960096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.227 [2024-10-01 13:44:04.960228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.227 [2024-10-01 13:44:04.960260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.227 [2024-10-01 13:44:04.960279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.227 [2024-10-01 13:44:04.960329] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.227 [2024-10-01 13:44:04.960354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.227 [2024-10-01 13:44:04.960370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.227 [2024-10-01 13:44:04.960404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.227 [2024-10-01 13:44:04.960427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.227 [2024-10-01 13:44:04.960454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.227 [2024-10-01 13:44:04.960472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.227 [2024-10-01 13:44:04.960486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.227 [2024-10-01 13:44:04.960503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.227 [2024-10-01 13:44:04.960518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.227 [2024-10-01 13:44:04.960531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.227 [2024-10-01 13:44:04.960586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.227 [2024-10-01 13:44:04.960606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.227 [2024-10-01 13:44:04.970206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.227 [2024-10-01 13:44:04.970283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.227 [2024-10-01 13:44:04.970365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.227 [2024-10-01 13:44:04.970396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.227 [2024-10-01 13:44:04.970414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.227 [2024-10-01 13:44:04.970482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.227 [2024-10-01 13:44:04.970510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.227 [2024-10-01 13:44:04.970526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.227 [2024-10-01 13:44:04.970562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.227 [2024-10-01 13:44:04.970829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.227 [2024-10-01 13:44:04.970869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.227 [2024-10-01 13:44:04.970887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.227 [2024-10-01 13:44:04.970902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.227 [2024-10-01 13:44:04.971048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.227 [2024-10-01 13:44:04.971088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.227 [2024-10-01 13:44:04.971106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.227 [2024-10-01 13:44:04.971121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.227 [2024-10-01 13:44:04.971232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.227 [2024-10-01 13:44:04.980757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.227 [2024-10-01 13:44:04.980807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.227 [2024-10-01 13:44:04.980906] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.227 [2024-10-01 13:44:04.980937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.227 [2024-10-01 13:44:04.980955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.227 [2024-10-01 13:44:04.981005] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.227 [2024-10-01 13:44:04.981030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.227 [2024-10-01 13:44:04.981046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.227 [2024-10-01 13:44:04.981079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.227 [2024-10-01 13:44:04.981102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.227 [2024-10-01 13:44:04.982185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.227 [2024-10-01 13:44:04.982225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.228 [2024-10-01 13:44:04.982244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.228 [2024-10-01 13:44:04.982262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.228 [2024-10-01 13:44:04.982277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.228 [2024-10-01 13:44:04.982290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.228 [2024-10-01 13:44:04.982508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.228 [2024-10-01 13:44:04.982552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.228 [2024-10-01 13:44:04.991517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.228 [2024-10-01 13:44:04.991583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.228 [2024-10-01 13:44:04.991685] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.228 [2024-10-01 13:44:04.991717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.228 [2024-10-01 13:44:04.991735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.228 [2024-10-01 13:44:04.991785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.228 [2024-10-01 13:44:04.991810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.228 [2024-10-01 13:44:04.991826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.228 [2024-10-01 13:44:04.991859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.228 [2024-10-01 13:44:04.991921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.228 [2024-10-01 13:44:04.991953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.228 [2024-10-01 13:44:04.991972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.228 [2024-10-01 13:44:04.991987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.228 [2024-10-01 13:44:04.992004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.228 [2024-10-01 13:44:04.992020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.228 [2024-10-01 13:44:04.992033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.228 [2024-10-01 13:44:04.992065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.228 [2024-10-01 13:44:04.992084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.228 [2024-10-01 13:44:05.002606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.228 [2024-10-01 13:44:05.002681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.228 [2024-10-01 13:44:05.002805] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.228 [2024-10-01 13:44:05.002840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.228 [2024-10-01 13:44:05.002859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.228 [2024-10-01 13:44:05.002910] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.228 [2024-10-01 13:44:05.002935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.228 [2024-10-01 13:44:05.002952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.228 [2024-10-01 13:44:05.002988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.228 [2024-10-01 13:44:05.003012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.228 [2024-10-01 13:44:05.003038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.228 [2024-10-01 13:44:05.003056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.228 [2024-10-01 13:44:05.003072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.228 [2024-10-01 13:44:05.003089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.228 [2024-10-01 13:44:05.003105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.228 [2024-10-01 13:44:05.003118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.228 [2024-10-01 13:44:05.003150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.228 [2024-10-01 13:44:05.003170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.228 [2024-10-01 13:44:05.012762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.228 [2024-10-01 13:44:05.012843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.228 [2024-10-01 13:44:05.012932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.228 [2024-10-01 13:44:05.013000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.228 [2024-10-01 13:44:05.013023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.228 [2024-10-01 13:44:05.013328] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.228 [2024-10-01 13:44:05.013371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.228 [2024-10-01 13:44:05.013391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.228 [2024-10-01 13:44:05.013411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.228 [2024-10-01 13:44:05.013574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.228 [2024-10-01 13:44:05.013604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.228 [2024-10-01 13:44:05.013619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.228 [2024-10-01 13:44:05.013634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.228 [2024-10-01 13:44:05.013745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.228 [2024-10-01 13:44:05.013768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.228 [2024-10-01 13:44:05.013783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.228 [2024-10-01 13:44:05.013797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.228 [2024-10-01 13:44:05.013835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.228 [2024-10-01 13:44:05.023198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.228 [2024-10-01 13:44:05.023249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.228 [2024-10-01 13:44:05.023347] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.228 [2024-10-01 13:44:05.023385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.228 [2024-10-01 13:44:05.023405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.228 [2024-10-01 13:44:05.023456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.228 [2024-10-01 13:44:05.023481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.228 [2024-10-01 13:44:05.023498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.228 [2024-10-01 13:44:05.023532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.228 [2024-10-01 13:44:05.023574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.228 [2024-10-01 13:44:05.024673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.228 [2024-10-01 13:44:05.024713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.228 [2024-10-01 13:44:05.024731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.228 [2024-10-01 13:44:05.024749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.228 [2024-10-01 13:44:05.024765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.228 [2024-10-01 13:44:05.024801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.228 [2024-10-01 13:44:05.025035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.228 [2024-10-01 13:44:05.025073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.228 [2024-10-01 13:44:05.034039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.228 [2024-10-01 13:44:05.034092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.228 [2024-10-01 13:44:05.034194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.228 [2024-10-01 13:44:05.034250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.228 [2024-10-01 13:44:05.034271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.228 [2024-10-01 13:44:05.034323] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.228 [2024-10-01 13:44:05.034348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.228 [2024-10-01 13:44:05.034365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.228 [2024-10-01 13:44:05.034399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.228 [2024-10-01 13:44:05.034423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.228 [2024-10-01 13:44:05.034449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.228 [2024-10-01 13:44:05.034467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.228 [2024-10-01 13:44:05.034482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.228 [2024-10-01 13:44:05.034499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.228 [2024-10-01 13:44:05.034514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.229 [2024-10-01 13:44:05.034528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.229 [2024-10-01 13:44:05.034578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.229 [2024-10-01 13:44:05.034599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.229 [2024-10-01 13:44:05.045001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.229 [2024-10-01 13:44:05.045075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.229 [2024-10-01 13:44:05.045191] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.229 [2024-10-01 13:44:05.045225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.229 [2024-10-01 13:44:05.045244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.229 [2024-10-01 13:44:05.045295] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.229 [2024-10-01 13:44:05.045320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.229 [2024-10-01 13:44:05.045337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.229 [2024-10-01 13:44:05.045372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.229 [2024-10-01 13:44:05.045396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.229 [2024-10-01 13:44:05.045452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.229 [2024-10-01 13:44:05.045471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.229 [2024-10-01 13:44:05.045488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.229 [2024-10-01 13:44:05.045505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.229 [2024-10-01 13:44:05.045520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.229 [2024-10-01 13:44:05.045549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.229 [2024-10-01 13:44:05.045587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.229 [2024-10-01 13:44:05.045608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.229 [2024-10-01 13:44:05.055173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.229 [2024-10-01 13:44:05.055295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.229 [2024-10-01 13:44:05.055411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.229 [2024-10-01 13:44:05.055444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.229 [2024-10-01 13:44:05.055464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.229 [2024-10-01 13:44:05.055548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.229 [2024-10-01 13:44:05.055578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.229 [2024-10-01 13:44:05.055596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.229 [2024-10-01 13:44:05.055618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.229 [2024-10-01 13:44:05.055908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.229 [2024-10-01 13:44:05.055950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.229 [2024-10-01 13:44:05.055968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.229 [2024-10-01 13:44:05.055984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.229 [2024-10-01 13:44:05.056134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.229 [2024-10-01 13:44:05.056162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.229 [2024-10-01 13:44:05.056177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.229 [2024-10-01 13:44:05.056191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.229 [2024-10-01 13:44:05.056307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.229 [2024-10-01 13:44:05.065813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.229 [2024-10-01 13:44:05.065867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.229 [2024-10-01 13:44:05.065968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.229 [2024-10-01 13:44:05.066000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.229 [2024-10-01 13:44:05.066054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.229 [2024-10-01 13:44:05.066112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.229 [2024-10-01 13:44:05.066138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.229 [2024-10-01 13:44:05.066155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.229 [2024-10-01 13:44:05.067250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.229 [2024-10-01 13:44:05.067297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.229 [2024-10-01 13:44:05.067529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.229 [2024-10-01 13:44:05.067583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.229 [2024-10-01 13:44:05.067601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.229 [2024-10-01 13:44:05.067619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.229 [2024-10-01 13:44:05.067634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.229 [2024-10-01 13:44:05.067647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.229 [2024-10-01 13:44:05.068731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.229 [2024-10-01 13:44:05.068769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.229 [2024-10-01 13:44:05.076584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.229 [2024-10-01 13:44:05.076636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.229 [2024-10-01 13:44:05.076738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.229 [2024-10-01 13:44:05.076777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.229 [2024-10-01 13:44:05.076797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.229 [2024-10-01 13:44:05.076847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.229 [2024-10-01 13:44:05.076872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.229 [2024-10-01 13:44:05.076889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.229 [2024-10-01 13:44:05.076923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.229 [2024-10-01 13:44:05.076946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.229 [2024-10-01 13:44:05.076973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.229 [2024-10-01 13:44:05.076991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.229 [2024-10-01 13:44:05.077006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.229 [2024-10-01 13:44:05.077023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.229 [2024-10-01 13:44:05.077039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.229 [2024-10-01 13:44:05.077052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.229 [2024-10-01 13:44:05.077101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.229 [2024-10-01 13:44:05.077123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.229 [2024-10-01 13:44:05.087435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.229 [2024-10-01 13:44:05.087489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.229 [2024-10-01 13:44:05.087607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.229 [2024-10-01 13:44:05.087643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.229 [2024-10-01 13:44:05.087662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.229 [2024-10-01 13:44:05.087714] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.229 [2024-10-01 13:44:05.087739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.229 [2024-10-01 13:44:05.087756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.230 [2024-10-01 13:44:05.087789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.230 [2024-10-01 13:44:05.087813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.230 [2024-10-01 13:44:05.087840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.230 [2024-10-01 13:44:05.087858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.230 [2024-10-01 13:44:05.087872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.230 [2024-10-01 13:44:05.087907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.230 [2024-10-01 13:44:05.087923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.230 [2024-10-01 13:44:05.087937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.230 [2024-10-01 13:44:05.087970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.230 [2024-10-01 13:44:05.087991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.230 [2024-10-01 13:44:05.097586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.230 [2024-10-01 13:44:05.097664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.230 [2024-10-01 13:44:05.097749] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.230 [2024-10-01 13:44:05.097802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.230 [2024-10-01 13:44:05.097822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.230 [2024-10-01 13:44:05.098127] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.230 [2024-10-01 13:44:05.098170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.230 [2024-10-01 13:44:05.098190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.230 [2024-10-01 13:44:05.098210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.230 [2024-10-01 13:44:05.098355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.230 [2024-10-01 13:44:05.098389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.230 [2024-10-01 13:44:05.098426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.230 [2024-10-01 13:44:05.098443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.230 [2024-10-01 13:44:05.098571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.230 [2024-10-01 13:44:05.098597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.230 [2024-10-01 13:44:05.098612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.230 [2024-10-01 13:44:05.098626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.230 [2024-10-01 13:44:05.098665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.230 [2024-10-01 13:44:05.108070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.230 [2024-10-01 13:44:05.108121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.230 [2024-10-01 13:44:05.108221] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.230 [2024-10-01 13:44:05.108254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.230 [2024-10-01 13:44:05.108271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.230 [2024-10-01 13:44:05.108320] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.230 [2024-10-01 13:44:05.108344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.230 [2024-10-01 13:44:05.108361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.230 [2024-10-01 13:44:05.108394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.230 [2024-10-01 13:44:05.108417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.230 [2024-10-01 13:44:05.109505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.230 [2024-10-01 13:44:05.109556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.230 [2024-10-01 13:44:05.109576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.230 [2024-10-01 13:44:05.109601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.230 [2024-10-01 13:44:05.109629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.230 [2024-10-01 13:44:05.109644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.230 [2024-10-01 13:44:05.109927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.230 [2024-10-01 13:44:05.109968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.230 [2024-10-01 13:44:05.118848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.230 [2024-10-01 13:44:05.118898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.230 [2024-10-01 13:44:05.118999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.230 [2024-10-01 13:44:05.119031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.230 [2024-10-01 13:44:05.119049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.230 [2024-10-01 13:44:05.119124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.230 [2024-10-01 13:44:05.119151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.230 [2024-10-01 13:44:05.119168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.230 [2024-10-01 13:44:05.119203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.230 [2024-10-01 13:44:05.119226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.230 [2024-10-01 13:44:05.119254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.230 [2024-10-01 13:44:05.119272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.230 [2024-10-01 13:44:05.119286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.230 [2024-10-01 13:44:05.119303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.230 [2024-10-01 13:44:05.119318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.230 [2024-10-01 13:44:05.119332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.230 [2024-10-01 13:44:05.119364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.230 [2024-10-01 13:44:05.119384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.230 [2024-10-01 13:44:05.129711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.230 [2024-10-01 13:44:05.129765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.230 [2024-10-01 13:44:05.129866] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.230 [2024-10-01 13:44:05.129904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.230 [2024-10-01 13:44:05.129924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.230 [2024-10-01 13:44:05.129975] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.230 [2024-10-01 13:44:05.130000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.230 [2024-10-01 13:44:05.130016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.230 [2024-10-01 13:44:05.130050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.230 [2024-10-01 13:44:05.130073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.230 [2024-10-01 13:44:05.130100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.230 [2024-10-01 13:44:05.130118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.230 [2024-10-01 13:44:05.130132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.230 [2024-10-01 13:44:05.130149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.230 [2024-10-01 13:44:05.130164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.230 [2024-10-01 13:44:05.130178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.230 [2024-10-01 13:44:05.130209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.230 [2024-10-01 13:44:05.130243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.230 [2024-10-01 13:44:05.139849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.230 [2024-10-01 13:44:05.139937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.230 [2024-10-01 13:44:05.140026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.230 [2024-10-01 13:44:05.140073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.230 [2024-10-01 13:44:05.140094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.230 [2024-10-01 13:44:05.140164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.230 [2024-10-01 13:44:05.140193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.230 [2024-10-01 13:44:05.140209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.230 [2024-10-01 13:44:05.140229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.230 [2024-10-01 13:44:05.140493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.231 [2024-10-01 13:44:05.140548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.231 [2024-10-01 13:44:05.140570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.231 [2024-10-01 13:44:05.140584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.231 [2024-10-01 13:44:05.140718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.231 [2024-10-01 13:44:05.140742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.231 [2024-10-01 13:44:05.140756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.231 [2024-10-01 13:44:05.140770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.231 [2024-10-01 13:44:05.140878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.231 [2024-10-01 13:44:05.150423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.231 [2024-10-01 13:44:05.150505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.231 [2024-10-01 13:44:05.150643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.231 [2024-10-01 13:44:05.150679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.231 [2024-10-01 13:44:05.150698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.231 [2024-10-01 13:44:05.150750] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.231 [2024-10-01 13:44:05.150774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.231 [2024-10-01 13:44:05.150790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.231 [2024-10-01 13:44:05.151920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.231 [2024-10-01 13:44:05.151967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.231 [2024-10-01 13:44:05.152207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.231 [2024-10-01 13:44:05.152246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.231 [2024-10-01 13:44:05.152298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.231 [2024-10-01 13:44:05.152318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.231 [2024-10-01 13:44:05.152335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.231 [2024-10-01 13:44:05.152348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.231 [2024-10-01 13:44:05.153445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.231 [2024-10-01 13:44:05.153484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.231 [2024-10-01 13:44:05.161460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.231 [2024-10-01 13:44:05.161563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.231 [2024-10-01 13:44:05.161700] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.231 [2024-10-01 13:44:05.161739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.231 [2024-10-01 13:44:05.161759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.231 [2024-10-01 13:44:05.161811] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.231 [2024-10-01 13:44:05.161836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.231 [2024-10-01 13:44:05.161853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.231 [2024-10-01 13:44:05.161889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.231 [2024-10-01 13:44:05.161913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.231 [2024-10-01 13:44:05.161940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.231 [2024-10-01 13:44:05.161957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.231 [2024-10-01 13:44:05.161974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.231 [2024-10-01 13:44:05.161992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.231 [2024-10-01 13:44:05.162007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.231 [2024-10-01 13:44:05.162021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.231 [2024-10-01 13:44:05.162053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.231 [2024-10-01 13:44:05.162073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.231 [2024-10-01 13:44:05.172454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.231 [2024-10-01 13:44:05.172508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.231 [2024-10-01 13:44:05.172623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.231 [2024-10-01 13:44:05.172657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.231 [2024-10-01 13:44:05.172675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.231 [2024-10-01 13:44:05.172726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.231 [2024-10-01 13:44:05.172751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.231 [2024-10-01 13:44:05.172794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.231 [2024-10-01 13:44:05.172830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.231 [2024-10-01 13:44:05.172853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.231 [2024-10-01 13:44:05.172881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.231 [2024-10-01 13:44:05.172898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.231 [2024-10-01 13:44:05.172913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.231 [2024-10-01 13:44:05.172930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.231 [2024-10-01 13:44:05.172945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.231 [2024-10-01 13:44:05.172959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.231 [2024-10-01 13:44:05.172990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.231 [2024-10-01 13:44:05.173010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.231 [2024-10-01 13:44:05.182608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.231 [2024-10-01 13:44:05.182661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.231 [2024-10-01 13:44:05.182768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.231 [2024-10-01 13:44:05.182806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.231 [2024-10-01 13:44:05.182826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.231 [2024-10-01 13:44:05.182878] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.231 [2024-10-01 13:44:05.182903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.231 [2024-10-01 13:44:05.182920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.232 [2024-10-01 13:44:05.183183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.232 [2024-10-01 13:44:05.183227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.232 [2024-10-01 13:44:05.183370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.232 [2024-10-01 13:44:05.183406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.232 [2024-10-01 13:44:05.183424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.232 [2024-10-01 13:44:05.183442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.232 [2024-10-01 13:44:05.183457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.232 [2024-10-01 13:44:05.183470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.232 [2024-10-01 13:44:05.183596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.232 [2024-10-01 13:44:05.183621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.232 [2024-10-01 13:44:05.192970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.232 [2024-10-01 13:44:05.193035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.232 [2024-10-01 13:44:05.193136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.232 [2024-10-01 13:44:05.193168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.232 [2024-10-01 13:44:05.193186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.232 [2024-10-01 13:44:05.193236] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.232 [2024-10-01 13:44:05.193260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.232 [2024-10-01 13:44:05.193277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.232 [2024-10-01 13:44:05.193310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.232 [2024-10-01 13:44:05.193334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.232 [2024-10-01 13:44:05.194419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.232 [2024-10-01 13:44:05.194459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.232 [2024-10-01 13:44:05.194478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.232 [2024-10-01 13:44:05.194497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.232 [2024-10-01 13:44:05.194512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.232 [2024-10-01 13:44:05.194526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.232 [2024-10-01 13:44:05.194759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.232 [2024-10-01 13:44:05.194787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.232 [2024-10-01 13:44:05.203747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.232 [2024-10-01 13:44:05.203796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.232 [2024-10-01 13:44:05.203903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.232 [2024-10-01 13:44:05.203935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.232 [2024-10-01 13:44:05.203953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.232 [2024-10-01 13:44:05.204004] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.232 [2024-10-01 13:44:05.204029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.232 [2024-10-01 13:44:05.204045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.232 [2024-10-01 13:44:05.204078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.232 [2024-10-01 13:44:05.204101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.232 [2024-10-01 13:44:05.204127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.232 [2024-10-01 13:44:05.204145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.232 [2024-10-01 13:44:05.204159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.232 [2024-10-01 13:44:05.204193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.232 [2024-10-01 13:44:05.204211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.232 [2024-10-01 13:44:05.204224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.232 [2024-10-01 13:44:05.204257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.232 [2024-10-01 13:44:05.204276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.232 [2024-10-01 13:44:05.214657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.232 [2024-10-01 13:44:05.214710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.232 [2024-10-01 13:44:05.214808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.232 [2024-10-01 13:44:05.214841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.232 [2024-10-01 13:44:05.214859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.232 [2024-10-01 13:44:05.214908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.232 [2024-10-01 13:44:05.214942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.232 [2024-10-01 13:44:05.214958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.232 [2024-10-01 13:44:05.214991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.232 [2024-10-01 13:44:05.215014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.232 [2024-10-01 13:44:05.215041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.232 [2024-10-01 13:44:05.215058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.232 [2024-10-01 13:44:05.215072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.232 [2024-10-01 13:44:05.215089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.232 [2024-10-01 13:44:05.215104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.232 [2024-10-01 13:44:05.215118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.232 [2024-10-01 13:44:05.215150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.232 [2024-10-01 13:44:05.215170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.232 [2024-10-01 13:44:05.224794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.232 [2024-10-01 13:44:05.224845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.232 [2024-10-01 13:44:05.224942] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.232 [2024-10-01 13:44:05.224974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.232 [2024-10-01 13:44:05.224992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.232 [2024-10-01 13:44:05.225041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.232 [2024-10-01 13:44:05.225066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.232 [2024-10-01 13:44:05.225083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.232 [2024-10-01 13:44:05.225366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.232 [2024-10-01 13:44:05.225411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.232 [2024-10-01 13:44:05.225571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.232 [2024-10-01 13:44:05.225607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.232 [2024-10-01 13:44:05.225625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.232 [2024-10-01 13:44:05.225643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.232 [2024-10-01 13:44:05.225658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.232 [2024-10-01 13:44:05.225671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.232 [2024-10-01 13:44:05.225783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.232 [2024-10-01 13:44:05.225806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.232 [2024-10-01 13:44:05.235190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.232 [2024-10-01 13:44:05.235240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.232 [2024-10-01 13:44:05.235338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.232 [2024-10-01 13:44:05.235378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.232 [2024-10-01 13:44:05.235396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.232 [2024-10-01 13:44:05.235446] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.232 [2024-10-01 13:44:05.235471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.232 [2024-10-01 13:44:05.235487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.232 [2024-10-01 13:44:05.235520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.233 [2024-10-01 13:44:05.235559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.233 [2024-10-01 13:44:05.236670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.233 [2024-10-01 13:44:05.236710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.233 [2024-10-01 13:44:05.236729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.233 [2024-10-01 13:44:05.236746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.233 [2024-10-01 13:44:05.236761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.233 [2024-10-01 13:44:05.236775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.233 [2024-10-01 13:44:05.237005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.233 [2024-10-01 13:44:05.237033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.233 [2024-10-01 13:44:05.246047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.233 [2024-10-01 13:44:05.246096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.233 [2024-10-01 13:44:05.246215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.233 [2024-10-01 13:44:05.246254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.233 [2024-10-01 13:44:05.246273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.233 [2024-10-01 13:44:05.246324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.233 [2024-10-01 13:44:05.246348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.233 [2024-10-01 13:44:05.246364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.233 [2024-10-01 13:44:05.246397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.233 [2024-10-01 13:44:05.246421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.233 [2024-10-01 13:44:05.246448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.233 [2024-10-01 13:44:05.246465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.233 [2024-10-01 13:44:05.246480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.233 [2024-10-01 13:44:05.246497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.233 [2024-10-01 13:44:05.246512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.233 [2024-10-01 13:44:05.246526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.233 [2024-10-01 13:44:05.246574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.233 [2024-10-01 13:44:05.246595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.233 [2024-10-01 13:44:05.256954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.233 [2024-10-01 13:44:05.257005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.233 [2024-10-01 13:44:05.257103] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.233 [2024-10-01 13:44:05.257135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.233 [2024-10-01 13:44:05.257153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.233 [2024-10-01 13:44:05.257203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.233 [2024-10-01 13:44:05.257228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.233 [2024-10-01 13:44:05.257244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.233 [2024-10-01 13:44:05.257277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.233 [2024-10-01 13:44:05.257301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.233 [2024-10-01 13:44:05.257327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.233 [2024-10-01 13:44:05.257345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.233 [2024-10-01 13:44:05.257359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.233 [2024-10-01 13:44:05.257375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.233 [2024-10-01 13:44:05.257407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.233 [2024-10-01 13:44:05.257423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.233 [2024-10-01 13:44:05.257456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.233 [2024-10-01 13:44:05.257476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.233 [2024-10-01 13:44:05.267086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.233 [2024-10-01 13:44:05.267164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.233 [2024-10-01 13:44:05.267249] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.233 [2024-10-01 13:44:05.267295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.233 [2024-10-01 13:44:05.267316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.233 [2024-10-01 13:44:05.267633] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.233 [2024-10-01 13:44:05.267676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.233 [2024-10-01 13:44:05.267696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.233 [2024-10-01 13:44:05.267716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.233 [2024-10-01 13:44:05.267850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.233 [2024-10-01 13:44:05.267876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.233 [2024-10-01 13:44:05.267904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.233 [2024-10-01 13:44:05.267918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.233 [2024-10-01 13:44:05.268029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.233 [2024-10-01 13:44:05.268052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.233 [2024-10-01 13:44:05.268066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.233 [2024-10-01 13:44:05.268081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.233 [2024-10-01 13:44:05.268119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.233 [2024-10-01 13:44:05.277562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.233 [2024-10-01 13:44:05.277611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.233 [2024-10-01 13:44:05.277709] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.233 [2024-10-01 13:44:05.277741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.233 [2024-10-01 13:44:05.277759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.233 [2024-10-01 13:44:05.277809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.233 [2024-10-01 13:44:05.277834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.233 [2024-10-01 13:44:05.277850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.233 [2024-10-01 13:44:05.277883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.233 [2024-10-01 13:44:05.277926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.233 [2024-10-01 13:44:05.279012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.233 [2024-10-01 13:44:05.279053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.233 [2024-10-01 13:44:05.279072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.233 [2024-10-01 13:44:05.279090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.233 [2024-10-01 13:44:05.279105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.233 [2024-10-01 13:44:05.279119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.233 [2024-10-01 13:44:05.279358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.233 [2024-10-01 13:44:05.279387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.233 [2024-10-01 13:44:05.288394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.233 [2024-10-01 13:44:05.288444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.233 [2024-10-01 13:44:05.288555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.233 [2024-10-01 13:44:05.288588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.233 [2024-10-01 13:44:05.288606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.233 [2024-10-01 13:44:05.288658] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.233 [2024-10-01 13:44:05.288683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.233 [2024-10-01 13:44:05.288700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.233 [2024-10-01 13:44:05.288733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.233 [2024-10-01 13:44:05.288757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.233 [2024-10-01 13:44:05.288784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.233 [2024-10-01 13:44:05.288801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.234 [2024-10-01 13:44:05.288815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.234 [2024-10-01 13:44:05.288832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.234 [2024-10-01 13:44:05.288847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.234 [2024-10-01 13:44:05.288860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.234 [2024-10-01 13:44:05.288892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.234 [2024-10-01 13:44:05.288911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.234 [2024-10-01 13:44:05.299283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.234 [2024-10-01 13:44:05.299334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.234 [2024-10-01 13:44:05.299432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.234 [2024-10-01 13:44:05.299500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.234 [2024-10-01 13:44:05.299523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.234 [2024-10-01 13:44:05.299593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.234 [2024-10-01 13:44:05.299621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.234 [2024-10-01 13:44:05.299637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.234 [2024-10-01 13:44:05.299672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.234 [2024-10-01 13:44:05.299696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.234 [2024-10-01 13:44:05.299723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.234 [2024-10-01 13:44:05.299741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.234 [2024-10-01 13:44:05.299755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.234 [2024-10-01 13:44:05.299772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.234 [2024-10-01 13:44:05.299787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.234 [2024-10-01 13:44:05.299801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.234 [2024-10-01 13:44:05.299833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.234 [2024-10-01 13:44:05.299852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.234 [2024-10-01 13:44:05.309412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.234 [2024-10-01 13:44:05.309487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.234 [2024-10-01 13:44:05.309587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.234 [2024-10-01 13:44:05.309619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.234 [2024-10-01 13:44:05.309638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.234 [2024-10-01 13:44:05.309708] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.234 [2024-10-01 13:44:05.309736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.234 [2024-10-01 13:44:05.309753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.234 [2024-10-01 13:44:05.309772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.234 [2024-10-01 13:44:05.310041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.234 [2024-10-01 13:44:05.310093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.234 [2024-10-01 13:44:05.310111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.234 [2024-10-01 13:44:05.310125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.234 [2024-10-01 13:44:05.310259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.234 [2024-10-01 13:44:05.310282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.234 [2024-10-01 13:44:05.310312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.234 [2024-10-01 13:44:05.310327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.234 [2024-10-01 13:44:05.310437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.234 [2024-10-01 13:44:05.320010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.234 [2024-10-01 13:44:05.320064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.234 [2024-10-01 13:44:05.320164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.234 [2024-10-01 13:44:05.320206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.234 [2024-10-01 13:44:05.320225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.234 [2024-10-01 13:44:05.320275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.234 [2024-10-01 13:44:05.320300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.234 [2024-10-01 13:44:05.320317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.234 [2024-10-01 13:44:05.320350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.234 [2024-10-01 13:44:05.320373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.234 [2024-10-01 13:44:05.321464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.234 [2024-10-01 13:44:05.321503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.234 [2024-10-01 13:44:05.321522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.234 [2024-10-01 13:44:05.321552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.234 [2024-10-01 13:44:05.321572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.234 [2024-10-01 13:44:05.321586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.234 [2024-10-01 13:44:05.321823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.234 [2024-10-01 13:44:05.321851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.234 [2024-10-01 13:44:05.330866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.234 [2024-10-01 13:44:05.330916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.234 [2024-10-01 13:44:05.331014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.234 [2024-10-01 13:44:05.331052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.234 [2024-10-01 13:44:05.331070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.234 [2024-10-01 13:44:05.331119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.234 [2024-10-01 13:44:05.331144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.234 [2024-10-01 13:44:05.331161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.234 [2024-10-01 13:44:05.331194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.234 [2024-10-01 13:44:05.331217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.234 [2024-10-01 13:44:05.331269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.234 [2024-10-01 13:44:05.331288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.234 [2024-10-01 13:44:05.331302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.234 [2024-10-01 13:44:05.331320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.234 [2024-10-01 13:44:05.331335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.234 [2024-10-01 13:44:05.331348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.234 [2024-10-01 13:44:05.331380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.235 [2024-10-01 13:44:05.331398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.235 [2024-10-01 13:44:05.341785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.235 [2024-10-01 13:44:05.341836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.235 [2024-10-01 13:44:05.341935] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.235 [2024-10-01 13:44:05.341968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.235 [2024-10-01 13:44:05.341986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.235 [2024-10-01 13:44:05.342036] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.235 [2024-10-01 13:44:05.342060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.235 [2024-10-01 13:44:05.342077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.235 [2024-10-01 13:44:05.342110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.235 [2024-10-01 13:44:05.342133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.235 [2024-10-01 13:44:05.342160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.235 [2024-10-01 13:44:05.342178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.235 [2024-10-01 13:44:05.342192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.235 [2024-10-01 13:44:05.342208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.235 [2024-10-01 13:44:05.342223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.235 [2024-10-01 13:44:05.342236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.235 [2024-10-01 13:44:05.342269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.235 [2024-10-01 13:44:05.342288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.235 [2024-10-01 13:44:05.351926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.235 [2024-10-01 13:44:05.352001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.235 [2024-10-01 13:44:05.352084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.235 [2024-10-01 13:44:05.352138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.235 [2024-10-01 13:44:05.352176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.235 [2024-10-01 13:44:05.352250] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.235 [2024-10-01 13:44:05.352278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.235 [2024-10-01 13:44:05.352295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.235 [2024-10-01 13:44:05.352314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.235 [2024-10-01 13:44:05.352593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.235 [2024-10-01 13:44:05.352634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.235 [2024-10-01 13:44:05.352652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.235 [2024-10-01 13:44:05.352666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.235 [2024-10-01 13:44:05.352814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.235 [2024-10-01 13:44:05.352840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.235 [2024-10-01 13:44:05.352855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.235 [2024-10-01 13:44:05.352869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.235 [2024-10-01 13:44:05.352978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.235 [2024-10-01 13:44:05.362421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.235 [2024-10-01 13:44:05.362470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.235 [2024-10-01 13:44:05.362582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.235 [2024-10-01 13:44:05.362614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.235 [2024-10-01 13:44:05.362632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.235 [2024-10-01 13:44:05.362682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.235 [2024-10-01 13:44:05.362706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.235 [2024-10-01 13:44:05.362723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.235 [2024-10-01 13:44:05.362756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.235 [2024-10-01 13:44:05.362779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.235 [2024-10-01 13:44:05.363861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.235 [2024-10-01 13:44:05.363910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.235 [2024-10-01 13:44:05.363928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.235 [2024-10-01 13:44:05.363946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.235 [2024-10-01 13:44:05.363961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.235 [2024-10-01 13:44:05.363974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.235 [2024-10-01 13:44:05.364210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.235 [2024-10-01 13:44:05.364245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.235 [2024-10-01 13:44:05.373211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.235 [2024-10-01 13:44:05.373260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.235 [2024-10-01 13:44:05.373358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.235 [2024-10-01 13:44:05.373389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.235 [2024-10-01 13:44:05.373407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.235 [2024-10-01 13:44:05.373456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.235 [2024-10-01 13:44:05.373481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.235 [2024-10-01 13:44:05.373497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.235 [2024-10-01 13:44:05.373530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.235 [2024-10-01 13:44:05.373573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.235 [2024-10-01 13:44:05.373603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.235 [2024-10-01 13:44:05.373620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.235 [2024-10-01 13:44:05.373635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.235 [2024-10-01 13:44:05.373652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.235 [2024-10-01 13:44:05.373667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.235 [2024-10-01 13:44:05.373681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.235 [2024-10-01 13:44:05.373712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.235 [2024-10-01 13:44:05.373731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.235 [2024-10-01 13:44:05.384130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.235 [2024-10-01 13:44:05.384180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.235 [2024-10-01 13:44:05.384277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.235 [2024-10-01 13:44:05.384308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.235 [2024-10-01 13:44:05.384326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.235 [2024-10-01 13:44:05.384375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.235 [2024-10-01 13:44:05.384400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.235 [2024-10-01 13:44:05.384416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.235 [2024-10-01 13:44:05.384448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.235 [2024-10-01 13:44:05.384471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.235 [2024-10-01 13:44:05.384498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.235 [2024-10-01 13:44:05.384548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.235 [2024-10-01 13:44:05.384567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.235 [2024-10-01 13:44:05.384584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.235 [2024-10-01 13:44:05.384600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.235 [2024-10-01 13:44:05.384613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.235 [2024-10-01 13:44:05.384646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.235 [2024-10-01 13:44:05.384666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.235 [2024-10-01 13:44:05.394261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.236 [2024-10-01 13:44:05.394345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.236 [2024-10-01 13:44:05.394433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.236 [2024-10-01 13:44:05.394470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.236 [2024-10-01 13:44:05.394490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.236 [2024-10-01 13:44:05.394574] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.236 [2024-10-01 13:44:05.394604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.236 [2024-10-01 13:44:05.394621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.236 [2024-10-01 13:44:05.394642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.236 [2024-10-01 13:44:05.394908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.236 [2024-10-01 13:44:05.394949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.236 [2024-10-01 13:44:05.394968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.236 [2024-10-01 13:44:05.394983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.236 [2024-10-01 13:44:05.395129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.236 [2024-10-01 13:44:05.395156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.236 [2024-10-01 13:44:05.395171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.236 [2024-10-01 13:44:05.395186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.236 [2024-10-01 13:44:05.395296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.236 [2024-10-01 13:44:05.405178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.236 [2024-10-01 13:44:05.405267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.236 [2024-10-01 13:44:05.405401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.236 [2024-10-01 13:44:05.405436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.236 [2024-10-01 13:44:05.405455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.236 [2024-10-01 13:44:05.405560] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.236 [2024-10-01 13:44:05.405589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.236 [2024-10-01 13:44:05.405606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.236 [2024-10-01 13:44:05.405643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.236 [2024-10-01 13:44:05.405668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.236 [2024-10-01 13:44:05.405714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.236 [2024-10-01 13:44:05.405737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.236 [2024-10-01 13:44:05.405754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.236 [2024-10-01 13:44:05.405771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.236 [2024-10-01 13:44:05.405787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.236 [2024-10-01 13:44:05.405800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.236 [2024-10-01 13:44:05.405833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.236 [2024-10-01 13:44:05.405853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.236 [2024-10-01 13:44:05.415352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.236 [2024-10-01 13:44:05.415434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.236 [2024-10-01 13:44:05.415518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.236 [2024-10-01 13:44:05.415562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.236 [2024-10-01 13:44:05.415582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.236 [2024-10-01 13:44:05.415651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.236 [2024-10-01 13:44:05.415679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.236 [2024-10-01 13:44:05.415695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.236 [2024-10-01 13:44:05.415714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.236 [2024-10-01 13:44:05.415747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.236 [2024-10-01 13:44:05.415768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.236 [2024-10-01 13:44:05.415792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.236 [2024-10-01 13:44:05.415807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.236 [2024-10-01 13:44:05.415839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.236 [2024-10-01 13:44:05.415859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.236 [2024-10-01 13:44:05.415873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.236 [2024-10-01 13:44:05.415901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.236 [2024-10-01 13:44:05.416854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.236 [2024-10-01 13:44:05.425454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.236 [2024-10-01 13:44:05.425584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.236 [2024-10-01 13:44:05.425629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.236 [2024-10-01 13:44:05.425650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.236 [2024-10-01 13:44:05.425699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.236 [2024-10-01 13:44:05.425741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.236 [2024-10-01 13:44:05.425774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.236 [2024-10-01 13:44:05.425791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.236 [2024-10-01 13:44:05.425805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.236 [2024-10-01 13:44:05.425836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.236 [2024-10-01 13:44:05.425897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.236 [2024-10-01 13:44:05.425924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.236 [2024-10-01 13:44:05.425940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.236 [2024-10-01 13:44:05.427272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.236 [2024-10-01 13:44:05.428254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.236 [2024-10-01 13:44:05.428294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.236 [2024-10-01 13:44:05.428312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.236 [2024-10-01 13:44:05.428428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.236 [2024-10-01 13:44:05.436808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.236 [2024-10-01 13:44:05.436857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.236 [2024-10-01 13:44:05.436956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.236 [2024-10-01 13:44:05.436993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.236 [2024-10-01 13:44:05.437013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.236 [2024-10-01 13:44:05.437063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.236 [2024-10-01 13:44:05.437088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.236 [2024-10-01 13:44:05.437104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.236 [2024-10-01 13:44:05.438173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.236 [2024-10-01 13:44:05.438217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.236 [2024-10-01 13:44:05.438839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.236 [2024-10-01 13:44:05.438877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.236 [2024-10-01 13:44:05.438915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.236 [2024-10-01 13:44:05.438935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.236 [2024-10-01 13:44:05.438950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.236 [2024-10-01 13:44:05.438964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.236 [2024-10-01 13:44:05.439040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.236 [2024-10-01 13:44:05.439063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.236 [2024-10-01 13:44:05.446938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.236 [2024-10-01 13:44:05.447013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.236 [2024-10-01 13:44:05.447095] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.236 [2024-10-01 13:44:05.447126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.237 [2024-10-01 13:44:05.447144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.237 [2024-10-01 13:44:05.447211] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.237 [2024-10-01 13:44:05.447238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.237 [2024-10-01 13:44:05.447255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.237 [2024-10-01 13:44:05.447274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.237 [2024-10-01 13:44:05.447307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.237 [2024-10-01 13:44:05.447328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.237 [2024-10-01 13:44:05.447342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.237 [2024-10-01 13:44:05.447356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.237 [2024-10-01 13:44:05.447387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.237 [2024-10-01 13:44:05.447407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.237 [2024-10-01 13:44:05.447421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.237 [2024-10-01 13:44:05.447435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.237 [2024-10-01 13:44:05.448662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.237 [2024-10-01 13:44:05.457633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.237 [2024-10-01 13:44:05.457683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.237 [2024-10-01 13:44:05.457783] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.237 [2024-10-01 13:44:05.457816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.237 [2024-10-01 13:44:05.457834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.237 [2024-10-01 13:44:05.457884] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.237 [2024-10-01 13:44:05.457930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.237 [2024-10-01 13:44:05.457949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.237 [2024-10-01 13:44:05.457983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.237 [2024-10-01 13:44:05.458007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.237 [2024-10-01 13:44:05.458034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.237 [2024-10-01 13:44:05.458052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.237 [2024-10-01 13:44:05.458066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.237 [2024-10-01 13:44:05.458083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.237 [2024-10-01 13:44:05.458098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.237 [2024-10-01 13:44:05.458111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.237 [2024-10-01 13:44:05.458144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.237 [2024-10-01 13:44:05.458164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.237 [2024-10-01 13:44:05.468120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.237 [2024-10-01 13:44:05.468177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.237 [2024-10-01 13:44:05.468280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.237 [2024-10-01 13:44:05.468312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.237 [2024-10-01 13:44:05.468330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.237 [2024-10-01 13:44:05.468380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.237 [2024-10-01 13:44:05.468406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.237 [2024-10-01 13:44:05.468422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.237 [2024-10-01 13:44:05.468455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.237 [2024-10-01 13:44:05.468479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.237 [2024-10-01 13:44:05.468505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.237 [2024-10-01 13:44:05.468523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.237 [2024-10-01 13:44:05.468552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.237 [2024-10-01 13:44:05.468572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.237 [2024-10-01 13:44:05.468589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.237 [2024-10-01 13:44:05.468602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.237 [2024-10-01 13:44:05.468864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.237 [2024-10-01 13:44:05.468891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.237 [2024-10-01 13:44:05.478950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.237 [2024-10-01 13:44:05.479064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.237 [2024-10-01 13:44:05.479196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.237 [2024-10-01 13:44:05.479231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.237 [2024-10-01 13:44:05.479250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.237 [2024-10-01 13:44:05.479301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.237 [2024-10-01 13:44:05.479326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.237 [2024-10-01 13:44:05.479343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.237 [2024-10-01 13:44:05.480458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.237 [2024-10-01 13:44:05.480504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.237 [2024-10-01 13:44:05.480752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.237 [2024-10-01 13:44:05.480790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.237 [2024-10-01 13:44:05.480808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.237 [2024-10-01 13:44:05.480827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.237 [2024-10-01 13:44:05.480842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.237 [2024-10-01 13:44:05.480856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.237 [2024-10-01 13:44:05.481937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.237 [2024-10-01 13:44:05.481975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.237 [2024-10-01 13:44:05.489811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.237 [2024-10-01 13:44:05.489861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.237 [2024-10-01 13:44:05.489960] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.237 [2024-10-01 13:44:05.489998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.237 [2024-10-01 13:44:05.490016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.237 [2024-10-01 13:44:05.490066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.237 [2024-10-01 13:44:05.490090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.237 [2024-10-01 13:44:05.490107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.237 [2024-10-01 13:44:05.490139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.237 [2024-10-01 13:44:05.490162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.237 [2024-10-01 13:44:05.490188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.237 [2024-10-01 13:44:05.490207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.237 [2024-10-01 13:44:05.490221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.237 [2024-10-01 13:44:05.490254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.237 [2024-10-01 13:44:05.490277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.237 [2024-10-01 13:44:05.490291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.237 [2024-10-01 13:44:05.490323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.237 [2024-10-01 13:44:05.490343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.237 [2024-10-01 13:44:05.500733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.237 [2024-10-01 13:44:05.500784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.237 [2024-10-01 13:44:05.500881] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.237 [2024-10-01 13:44:05.500913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.237 [2024-10-01 13:44:05.500930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.237 [2024-10-01 13:44:05.500980] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.238 [2024-10-01 13:44:05.501006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.238 [2024-10-01 13:44:05.501022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.238 [2024-10-01 13:44:05.501055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.238 [2024-10-01 13:44:05.501079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.238 [2024-10-01 13:44:05.501105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.238 [2024-10-01 13:44:05.501123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.238 [2024-10-01 13:44:05.501138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.238 [2024-10-01 13:44:05.501155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.238 [2024-10-01 13:44:05.501170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.238 [2024-10-01 13:44:05.501184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.238 [2024-10-01 13:44:05.501215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.238 [2024-10-01 13:44:05.501235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.238 [2024-10-01 13:44:05.510874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.238 [2024-10-01 13:44:05.510986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.238 [2024-10-01 13:44:05.511095] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.238 [2024-10-01 13:44:05.511129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.238 [2024-10-01 13:44:05.511147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.238 [2024-10-01 13:44:05.511215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.238 [2024-10-01 13:44:05.511243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.238 [2024-10-01 13:44:05.511336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.238 [2024-10-01 13:44:05.511361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.238 [2024-10-01 13:44:05.511653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.238 [2024-10-01 13:44:05.511695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.238 [2024-10-01 13:44:05.511713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.238 [2024-10-01 13:44:05.511728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.238 [2024-10-01 13:44:05.511887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.238 [2024-10-01 13:44:05.511915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.238 [2024-10-01 13:44:05.511930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.238 [2024-10-01 13:44:05.511945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.238 [2024-10-01 13:44:05.512057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.238 [2024-10-01 13:44:05.521480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.238 [2024-10-01 13:44:05.521532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.238 [2024-10-01 13:44:05.521651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.238 [2024-10-01 13:44:05.521684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.238 [2024-10-01 13:44:05.521703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.238 [2024-10-01 13:44:05.521753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.238 [2024-10-01 13:44:05.521778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.238 [2024-10-01 13:44:05.521795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.238 [2024-10-01 13:44:05.521828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.238 [2024-10-01 13:44:05.521851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.238 [2024-10-01 13:44:05.522944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.238 [2024-10-01 13:44:05.522984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.238 [2024-10-01 13:44:05.523003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.238 [2024-10-01 13:44:05.523021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.238 [2024-10-01 13:44:05.523037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.238 [2024-10-01 13:44:05.523050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.238 [2024-10-01 13:44:05.523297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.238 [2024-10-01 13:44:05.523327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.238 [2024-10-01 13:44:05.532530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.238 [2024-10-01 13:44:05.532636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.238 [2024-10-01 13:44:05.532808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.238 [2024-10-01 13:44:05.532849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.238 [2024-10-01 13:44:05.532871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.238 [2024-10-01 13:44:05.532922] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.238 [2024-10-01 13:44:05.532948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.238 [2024-10-01 13:44:05.532964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.238 [2024-10-01 13:44:05.533000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.238 [2024-10-01 13:44:05.533024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.238 [2024-10-01 13:44:05.533052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.238 [2024-10-01 13:44:05.533070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.238 [2024-10-01 13:44:05.533086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.238 [2024-10-01 13:44:05.533103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.238 [2024-10-01 13:44:05.533118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.238 [2024-10-01 13:44:05.533131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.238 [2024-10-01 13:44:05.533164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.238 [2024-10-01 13:44:05.533184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.238 [2024-10-01 13:44:05.543432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.238 [2024-10-01 13:44:05.543483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.238 [2024-10-01 13:44:05.543597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.238 [2024-10-01 13:44:05.543631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.238 [2024-10-01 13:44:05.543648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.238 [2024-10-01 13:44:05.543699] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.238 [2024-10-01 13:44:05.543724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.238 [2024-10-01 13:44:05.543740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.238 [2024-10-01 13:44:05.543774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.238 [2024-10-01 13:44:05.543798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.238 [2024-10-01 13:44:05.543825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.238 [2024-10-01 13:44:05.543842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.238 [2024-10-01 13:44:05.543856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.238 [2024-10-01 13:44:05.543873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.238 [2024-10-01 13:44:05.543919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.238 [2024-10-01 13:44:05.543935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.238 [2024-10-01 13:44:05.543969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.238 [2024-10-01 13:44:05.543989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.238 [2024-10-01 13:44:05.553586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.238 [2024-10-01 13:44:05.553667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.238 [2024-10-01 13:44:05.553754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.238 [2024-10-01 13:44:05.553785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.238 [2024-10-01 13:44:05.553803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.238 [2024-10-01 13:44:05.553871] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.238 [2024-10-01 13:44:05.553899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.238 [2024-10-01 13:44:05.553916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.239 [2024-10-01 13:44:05.553935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.239 [2024-10-01 13:44:05.553968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.239 [2024-10-01 13:44:05.553989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.239 [2024-10-01 13:44:05.554003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.239 [2024-10-01 13:44:05.554017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.239 [2024-10-01 13:44:05.554279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.239 [2024-10-01 13:44:05.554308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.239 [2024-10-01 13:44:05.554324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.239 [2024-10-01 13:44:05.554338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.239 [2024-10-01 13:44:05.554482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.239 [2024-10-01 13:44:05.564335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.239 [2024-10-01 13:44:05.564385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.239 [2024-10-01 13:44:05.564486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.239 [2024-10-01 13:44:05.564519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.239 [2024-10-01 13:44:05.564552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.239 [2024-10-01 13:44:05.564608] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.239 [2024-10-01 13:44:05.564634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.239 [2024-10-01 13:44:05.564650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.239 [2024-10-01 13:44:05.565796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.239 [2024-10-01 13:44:05.565844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.239 [2024-10-01 13:44:05.566081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.239 [2024-10-01 13:44:05.566119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.239 [2024-10-01 13:44:05.566137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.239 [2024-10-01 13:44:05.566155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.239 [2024-10-01 13:44:05.566171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.239 [2024-10-01 13:44:05.566184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.239 [2024-10-01 13:44:05.567265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.239 [2024-10-01 13:44:05.567303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.239 [2024-10-01 13:44:05.575395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.239 [2024-10-01 13:44:05.575444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.239 [2024-10-01 13:44:05.575556] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.239 [2024-10-01 13:44:05.575588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.239 [2024-10-01 13:44:05.575606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.239 [2024-10-01 13:44:05.575657] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.239 [2024-10-01 13:44:05.575682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.239 [2024-10-01 13:44:05.575698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.239 [2024-10-01 13:44:05.575732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.239 [2024-10-01 13:44:05.575761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.239 [2024-10-01 13:44:05.575788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.239 [2024-10-01 13:44:05.575806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.239 [2024-10-01 13:44:05.575821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.239 [2024-10-01 13:44:05.575838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.239 [2024-10-01 13:44:05.575853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.239 [2024-10-01 13:44:05.575867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.239 [2024-10-01 13:44:05.575910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.239 [2024-10-01 13:44:05.575931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.239 [2024-10-01 13:44:05.586446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.239 [2024-10-01 13:44:05.586511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.239 [2024-10-01 13:44:05.586632] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.239 [2024-10-01 13:44:05.586691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.239 [2024-10-01 13:44:05.586713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.239 [2024-10-01 13:44:05.586766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.239 [2024-10-01 13:44:05.586791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.239 [2024-10-01 13:44:05.586808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.239 [2024-10-01 13:44:05.586843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.239 [2024-10-01 13:44:05.586867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.239 [2024-10-01 13:44:05.586894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.239 [2024-10-01 13:44:05.586912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.239 [2024-10-01 13:44:05.586927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.239 [2024-10-01 13:44:05.586944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.239 [2024-10-01 13:44:05.586959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.239 [2024-10-01 13:44:05.586973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.239 [2024-10-01 13:44:05.587006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.239 [2024-10-01 13:44:05.587026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.239 [2024-10-01 13:44:05.596852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.239 [2024-10-01 13:44:05.596927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.239 [2024-10-01 13:44:05.597045] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.239 [2024-10-01 13:44:05.597079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.239 [2024-10-01 13:44:05.597097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.239 [2024-10-01 13:44:05.597148] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.239 [2024-10-01 13:44:05.597172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.239 [2024-10-01 13:44:05.597189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.240 [2024-10-01 13:44:05.597223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.240 [2024-10-01 13:44:05.597247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.240 [2024-10-01 13:44:05.597274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.240 [2024-10-01 13:44:05.597292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.240 [2024-10-01 13:44:05.597307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.240 [2024-10-01 13:44:05.597324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.240 [2024-10-01 13:44:05.597339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.240 [2024-10-01 13:44:05.597375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.240 [2024-10-01 13:44:05.597661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.240 [2024-10-01 13:44:05.597690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.240 [2024-10-01 13:44:05.607553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.240 [2024-10-01 13:44:05.607602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.240 [2024-10-01 13:44:05.607701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.240 [2024-10-01 13:44:05.607733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.240 [2024-10-01 13:44:05.607751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.240 [2024-10-01 13:44:05.607801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.240 [2024-10-01 13:44:05.607826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.240 [2024-10-01 13:44:05.607849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.240 [2024-10-01 13:44:05.607893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.240 [2024-10-01 13:44:05.607918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.240 [2024-10-01 13:44:05.609019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.240 [2024-10-01 13:44:05.609061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.240 [2024-10-01 13:44:05.609079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.240 [2024-10-01 13:44:05.609097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.240 [2024-10-01 13:44:05.609112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.240 [2024-10-01 13:44:05.609126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.240 [2024-10-01 13:44:05.609348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.240 [2024-10-01 13:44:05.609375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.240 [2024-10-01 13:44:05.618394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.240 [2024-10-01 13:44:05.618444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.240 [2024-10-01 13:44:05.618591] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.240 [2024-10-01 13:44:05.618629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.240 [2024-10-01 13:44:05.618649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.240 [2024-10-01 13:44:05.618704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.240 [2024-10-01 13:44:05.618730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.240 [2024-10-01 13:44:05.618747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.240 [2024-10-01 13:44:05.618782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.240 [2024-10-01 13:44:05.618824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.240 [2024-10-01 13:44:05.618855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.240 [2024-10-01 13:44:05.618873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.240 [2024-10-01 13:44:05.618887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.240 [2024-10-01 13:44:05.618905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.240 [2024-10-01 13:44:05.618920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.240 [2024-10-01 13:44:05.618934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.240 [2024-10-01 13:44:05.618965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.240 [2024-10-01 13:44:05.618984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.240 [2024-10-01 13:44:05.629310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.240 [2024-10-01 13:44:05.629372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.240 [2024-10-01 13:44:05.629476] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.240 [2024-10-01 13:44:05.629509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.240 [2024-10-01 13:44:05.629527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.240 [2024-10-01 13:44:05.629597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.240 [2024-10-01 13:44:05.629623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.240 [2024-10-01 13:44:05.629640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.240 [2024-10-01 13:44:05.629674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.240 [2024-10-01 13:44:05.629697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.240 [2024-10-01 13:44:05.629724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.240 [2024-10-01 13:44:05.629742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.240 [2024-10-01 13:44:05.629757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.240 [2024-10-01 13:44:05.629774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.240 [2024-10-01 13:44:05.629789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.241 [2024-10-01 13:44:05.629802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.241 [2024-10-01 13:44:05.629835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.241 [2024-10-01 13:44:05.629855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.241 [2024-10-01 13:44:05.639649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.241 [2024-10-01 13:44:05.639742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.241 [2024-10-01 13:44:05.639889] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.241 [2024-10-01 13:44:05.639926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.241 [2024-10-01 13:44:05.639977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.241 [2024-10-01 13:44:05.640033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.241 [2024-10-01 13:44:05.640059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.241 [2024-10-01 13:44:05.640076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.241 [2024-10-01 13:44:05.640356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.241 [2024-10-01 13:44:05.640401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.241 [2024-10-01 13:44:05.640565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.241 [2024-10-01 13:44:05.640601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.241 [2024-10-01 13:44:05.640619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.241 [2024-10-01 13:44:05.640638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.241 [2024-10-01 13:44:05.640653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.241 [2024-10-01 13:44:05.640667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.241 [2024-10-01 13:44:05.640801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.241 [2024-10-01 13:44:05.640827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.241 [2024-10-01 13:44:05.650444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.241 [2024-10-01 13:44:05.650512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.241 [2024-10-01 13:44:05.650647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.241 [2024-10-01 13:44:05.650682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.241 [2024-10-01 13:44:05.650701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.241 [2024-10-01 13:44:05.650750] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.241 [2024-10-01 13:44:05.650775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.241 [2024-10-01 13:44:05.650791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.241 [2024-10-01 13:44:05.651901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.241 [2024-10-01 13:44:05.651947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.241 [2024-10-01 13:44:05.652150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.241 [2024-10-01 13:44:05.652185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.241 [2024-10-01 13:44:05.652204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.241 [2024-10-01 13:44:05.652221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.241 [2024-10-01 13:44:05.652236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.241 [2024-10-01 13:44:05.652250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.241 [2024-10-01 13:44:05.653345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.241 [2024-10-01 13:44:05.653385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.241 [2024-10-01 13:44:05.661313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.241 [2024-10-01 13:44:05.661362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.241 [2024-10-01 13:44:05.661461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.241 [2024-10-01 13:44:05.661493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.241 [2024-10-01 13:44:05.661511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.241 [2024-10-01 13:44:05.661575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.241 [2024-10-01 13:44:05.661603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.241 [2024-10-01 13:44:05.661620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.241 [2024-10-01 13:44:05.661655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.241 [2024-10-01 13:44:05.661678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.241 [2024-10-01 13:44:05.661705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.241 [2024-10-01 13:44:05.661722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.241 [2024-10-01 13:44:05.661737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.241 [2024-10-01 13:44:05.661754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.241 [2024-10-01 13:44:05.661769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.241 [2024-10-01 13:44:05.661783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.241 [2024-10-01 13:44:05.661814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.241 [2024-10-01 13:44:05.661833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.241 8642.08 IOPS, 33.76 MiB/s [2024-10-01 13:44:05.672352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.241 [2024-10-01 13:44:05.672400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.241 [2024-10-01 13:44:05.672498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.241 [2024-10-01 13:44:05.672529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.241 [2024-10-01 13:44:05.672563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.241 [2024-10-01 13:44:05.672617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.242 [2024-10-01 13:44:05.672642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.242 [2024-10-01 13:44:05.672659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.242 [2024-10-01 13:44:05.672692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.242 [2024-10-01 13:44:05.672715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.242 [2024-10-01 13:44:05.672764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.242 [2024-10-01 13:44:05.672784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.242 [2024-10-01 13:44:05.672798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.242 [2024-10-01 13:44:05.672815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.242 [2024-10-01 13:44:05.672830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.242 [2024-10-01 13:44:05.672843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.242 [2024-10-01 13:44:05.672875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.242 [2024-10-01 13:44:05.672895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
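The interleaved throughput sample at the start of this block (8642.08 IOPS, 33.76 MiB/s) implies an I/O size of roughly 4 KiB: 33.76 MiB/s divided by 8642.08 IOPS is about 4096 bytes per operation. The 4 KiB figure is an inference from those two numbers, not something the test states; a tiny check:

/* Back-of-the-envelope check (an inference, not from the test config): the
 * sample "8642.08 IOPS, 33.76 MiB/s" implies roughly 4 KiB per I/O. */
#include <stdio.h>

int main(void)
{
    double iops = 8642.08;                       /* from the log line above */
    double mib_per_s = 33.76;                    /* from the log line above */
    double bytes_per_io = mib_per_s * 1024 * 1024 / iops;

    printf("implied I/O size: %.1f bytes (~%.0f KiB)\n",
           bytes_per_io, bytes_per_io / 1024);   /* prints about 4096.2 bytes, ~4 KiB */
    return 0;
}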
00:16:17.242 [2024-10-01 13:44:05.682484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.242 [2024-10-01 13:44:05.682580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.242 [2024-10-01 13:44:05.682665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.242 [2024-10-01 13:44:05.682713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.242 [2024-10-01 13:44:05.682733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.242 [2024-10-01 13:44:05.683042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.242 [2024-10-01 13:44:05.683084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.242 [2024-10-01 13:44:05.683104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.242 [2024-10-01 13:44:05.683124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.242 [2024-10-01 13:44:05.683269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.242 [2024-10-01 13:44:05.683298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.242 [2024-10-01 13:44:05.683313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.242 [2024-10-01 13:44:05.683328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.242 [2024-10-01 13:44:05.683440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.242 [2024-10-01 13:44:05.683463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.242 [2024-10-01 13:44:05.683478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.242 [2024-10-01 13:44:05.683492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.242 [2024-10-01 13:44:05.683530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.242 [2024-10-01 13:44:05.693060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.242 [2024-10-01 13:44:05.693121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.242 [2024-10-01 13:44:05.693231] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.242 [2024-10-01 13:44:05.693264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.242 [2024-10-01 13:44:05.693283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.242 [2024-10-01 13:44:05.693366] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.242 [2024-10-01 13:44:05.693393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.242 [2024-10-01 13:44:05.693410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.242 [2024-10-01 13:44:05.694510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.242 [2024-10-01 13:44:05.694569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.242 [2024-10-01 13:44:05.694806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.242 [2024-10-01 13:44:05.694845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.242 [2024-10-01 13:44:05.694863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.242 [2024-10-01 13:44:05.694881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.242 [2024-10-01 13:44:05.694897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.242 [2024-10-01 13:44:05.694910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.242 [2024-10-01 13:44:05.694953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.242 [2024-10-01 13:44:05.694975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.242 [2024-10-01 13:44:05.703199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.242 [2024-10-01 13:44:05.703279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.242 [2024-10-01 13:44:05.703361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.242 [2024-10-01 13:44:05.703392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.242 [2024-10-01 13:44:05.703410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.242 [2024-10-01 13:44:05.704394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.242 [2024-10-01 13:44:05.704438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.242 [2024-10-01 13:44:05.704459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.242 [2024-10-01 13:44:05.704478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.242 [2024-10-01 13:44:05.704692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.242 [2024-10-01 13:44:05.704733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.242 [2024-10-01 13:44:05.704751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.242 [2024-10-01 13:44:05.704766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.242 [2024-10-01 13:44:05.704808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.243 [2024-10-01 13:44:05.704830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.243 [2024-10-01 13:44:05.704845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.243 [2024-10-01 13:44:05.704859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.243 [2024-10-01 13:44:05.704909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.243 [2024-10-01 13:44:05.715586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.243 [2024-10-01 13:44:05.715636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.243 [2024-10-01 13:44:05.715734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.243 [2024-10-01 13:44:05.715765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.243 [2024-10-01 13:44:05.715783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.243 [2024-10-01 13:44:05.715832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.243 [2024-10-01 13:44:05.715856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.243 [2024-10-01 13:44:05.715873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.243 [2024-10-01 13:44:05.715919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.243 [2024-10-01 13:44:05.715943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.243 [2024-10-01 13:44:05.715970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.243 [2024-10-01 13:44:05.715988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.243 [2024-10-01 13:44:05.716003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.243 [2024-10-01 13:44:05.716019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.243 [2024-10-01 13:44:05.716035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.243 [2024-10-01 13:44:05.716048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.243 [2024-10-01 13:44:05.716081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.243 [2024-10-01 13:44:05.716100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.243 [2024-10-01 13:44:05.725819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.243 [2024-10-01 13:44:05.725903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.243 [2024-10-01 13:44:05.726026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.243 [2024-10-01 13:44:05.726061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.243 [2024-10-01 13:44:05.726081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.243 [2024-10-01 13:44:05.726131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.243 [2024-10-01 13:44:05.726156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.243 [2024-10-01 13:44:05.726173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.243 [2024-10-01 13:44:05.726218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.243 [2024-10-01 13:44:05.726241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.243 [2024-10-01 13:44:05.726269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.243 [2024-10-01 13:44:05.726307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.243 [2024-10-01 13:44:05.726324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.243 [2024-10-01 13:44:05.726342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.243 [2024-10-01 13:44:05.726357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.243 [2024-10-01 13:44:05.726370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.243 [2024-10-01 13:44:05.726656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.243 [2024-10-01 13:44:05.726685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.243 [2024-10-01 13:44:05.736624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.243 [2024-10-01 13:44:05.736673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.243 [2024-10-01 13:44:05.736776] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.243 [2024-10-01 13:44:05.736808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.243 [2024-10-01 13:44:05.736826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.243 [2024-10-01 13:44:05.736876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.243 [2024-10-01 13:44:05.736901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.243 [2024-10-01 13:44:05.736917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.243 [2024-10-01 13:44:05.736951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.243 [2024-10-01 13:44:05.736974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.243 [2024-10-01 13:44:05.738061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.243 [2024-10-01 13:44:05.738101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.243 [2024-10-01 13:44:05.738120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.243 [2024-10-01 13:44:05.738138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.243 [2024-10-01 13:44:05.738153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.243 [2024-10-01 13:44:05.738167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.243 [2024-10-01 13:44:05.738396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.243 [2024-10-01 13:44:05.738424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.243 [2024-10-01 13:44:05.747490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.243 [2024-10-01 13:44:05.747553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.243 [2024-10-01 13:44:05.747654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.244 [2024-10-01 13:44:05.747701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.244 [2024-10-01 13:44:05.747721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.244 [2024-10-01 13:44:05.747772] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.244 [2024-10-01 13:44:05.747814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.244 [2024-10-01 13:44:05.747833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.244 [2024-10-01 13:44:05.747868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.244 [2024-10-01 13:44:05.747903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.244 [2024-10-01 13:44:05.747932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.244 [2024-10-01 13:44:05.747950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.244 [2024-10-01 13:44:05.747965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.244 [2024-10-01 13:44:05.747982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.244 [2024-10-01 13:44:05.747997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.244 [2024-10-01 13:44:05.748010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.244 [2024-10-01 13:44:05.748042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.244 [2024-10-01 13:44:05.748062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.244 [2024-10-01 13:44:05.758422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.244 [2024-10-01 13:44:05.758473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.244 [2024-10-01 13:44:05.758584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.244 [2024-10-01 13:44:05.758617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.244 [2024-10-01 13:44:05.758635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.244 [2024-10-01 13:44:05.758686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.244 [2024-10-01 13:44:05.758711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.244 [2024-10-01 13:44:05.758728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.244 [2024-10-01 13:44:05.758762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.244 [2024-10-01 13:44:05.758786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.244 [2024-10-01 13:44:05.758813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.244 [2024-10-01 13:44:05.758831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.244 [2024-10-01 13:44:05.758845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.244 [2024-10-01 13:44:05.758861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.244 [2024-10-01 13:44:05.758877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.244 [2024-10-01 13:44:05.758890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.244 [2024-10-01 13:44:05.758922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.244 [2024-10-01 13:44:05.758941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.244 [2024-10-01 13:44:05.768570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.244 [2024-10-01 13:44:05.768645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.244 [2024-10-01 13:44:05.768728] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.244 [2024-10-01 13:44:05.768760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.244 [2024-10-01 13:44:05.768779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.244 [2024-10-01 13:44:05.768845] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.244 [2024-10-01 13:44:05.768873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.244 [2024-10-01 13:44:05.768889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.244 [2024-10-01 13:44:05.768908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.244 [2024-10-01 13:44:05.768941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.244 [2024-10-01 13:44:05.768962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.244 [2024-10-01 13:44:05.768976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.244 [2024-10-01 13:44:05.768991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.244 [2024-10-01 13:44:05.769253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.244 [2024-10-01 13:44:05.769281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.244 [2024-10-01 13:44:05.769297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.244 [2024-10-01 13:44:05.769311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.244 [2024-10-01 13:44:05.769447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.244 [2024-10-01 13:44:05.779185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.244 [2024-10-01 13:44:05.779234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.244 [2024-10-01 13:44:05.779334] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.244 [2024-10-01 13:44:05.779372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.244 [2024-10-01 13:44:05.779391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.244 [2024-10-01 13:44:05.779442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.244 [2024-10-01 13:44:05.779472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.244 [2024-10-01 13:44:05.779489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.244 [2024-10-01 13:44:05.779522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.244 [2024-10-01 13:44:05.779563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.244 [2024-10-01 13:44:05.780662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.244 [2024-10-01 13:44:05.780700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.244 [2024-10-01 13:44:05.780738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.245 [2024-10-01 13:44:05.780758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.245 [2024-10-01 13:44:05.780773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.245 [2024-10-01 13:44:05.780788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.245 [2024-10-01 13:44:05.781017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.245 [2024-10-01 13:44:05.781055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.245 [2024-10-01 13:44:05.790026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.245 [2024-10-01 13:44:05.790075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.245 [2024-10-01 13:44:05.790173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.245 [2024-10-01 13:44:05.790204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.245 [2024-10-01 13:44:05.790222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.245 [2024-10-01 13:44:05.790272] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.245 [2024-10-01 13:44:05.790297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.245 [2024-10-01 13:44:05.790314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.245 [2024-10-01 13:44:05.790347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.245 [2024-10-01 13:44:05.790370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.245 [2024-10-01 13:44:05.790397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.245 [2024-10-01 13:44:05.790415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.245 [2024-10-01 13:44:05.790429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.245 [2024-10-01 13:44:05.790446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.245 [2024-10-01 13:44:05.790461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.245 [2024-10-01 13:44:05.790474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.245 [2024-10-01 13:44:05.790506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.245 [2024-10-01 13:44:05.790526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.245 [2024-10-01 13:44:05.800933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.245 [2024-10-01 13:44:05.800983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.245 [2024-10-01 13:44:05.801081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.245 [2024-10-01 13:44:05.801113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.245 [2024-10-01 13:44:05.801131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.245 [2024-10-01 13:44:05.801181] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.245 [2024-10-01 13:44:05.801206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.245 [2024-10-01 13:44:05.801242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.245 [2024-10-01 13:44:05.801278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.245 [2024-10-01 13:44:05.801301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.245 [2024-10-01 13:44:05.801328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.245 [2024-10-01 13:44:05.801346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.245 [2024-10-01 13:44:05.801361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.245 [2024-10-01 13:44:05.801378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.245 [2024-10-01 13:44:05.801393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.245 [2024-10-01 13:44:05.801407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.245 [2024-10-01 13:44:05.801439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.245 [2024-10-01 13:44:05.801458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.245 [2024-10-01 13:44:05.811067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.245 [2024-10-01 13:44:05.811117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.245 [2024-10-01 13:44:05.811214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.245 [2024-10-01 13:44:05.811246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.245 [2024-10-01 13:44:05.811263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.245 [2024-10-01 13:44:05.811313] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.245 [2024-10-01 13:44:05.811338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.245 [2024-10-01 13:44:05.811354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.245 [2024-10-01 13:44:05.811387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.245 [2024-10-01 13:44:05.811411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.245 [2024-10-01 13:44:05.811683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.246 [2024-10-01 13:44:05.811722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.246 [2024-10-01 13:44:05.811740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.246 [2024-10-01 13:44:05.811758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.246 [2024-10-01 13:44:05.811773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.246 [2024-10-01 13:44:05.811787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.246 [2024-10-01 13:44:05.811944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.246 [2024-10-01 13:44:05.811970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.246 [2024-10-01 13:44:05.821587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.246 [2024-10-01 13:44:05.821653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.246 [2024-10-01 13:44:05.821755] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.246 [2024-10-01 13:44:05.821787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.246 [2024-10-01 13:44:05.821805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.246 [2024-10-01 13:44:05.821855] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.246 [2024-10-01 13:44:05.821880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.246 [2024-10-01 13:44:05.821897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.246 [2024-10-01 13:44:05.821929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.246 [2024-10-01 13:44:05.821953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.246 [2024-10-01 13:44:05.823039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.246 [2024-10-01 13:44:05.823078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.246 [2024-10-01 13:44:05.823097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.246 [2024-10-01 13:44:05.823114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.246 [2024-10-01 13:44:05.823129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.246 [2024-10-01 13:44:05.823143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.246 [2024-10-01 13:44:05.823362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.246 [2024-10-01 13:44:05.823389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.246 [2024-10-01 13:44:05.832821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.246 [2024-10-01 13:44:05.832906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.246 [2024-10-01 13:44:05.832989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.246 [2024-10-01 13:44:05.833029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.246 [2024-10-01 13:44:05.833049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.246 [2024-10-01 13:44:05.833119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.246 [2024-10-01 13:44:05.833154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.246 [2024-10-01 13:44:05.833170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.246 [2024-10-01 13:44:05.833190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.246 [2024-10-01 13:44:05.833224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.246 [2024-10-01 13:44:05.833244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.246 [2024-10-01 13:44:05.833259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.246 [2024-10-01 13:44:05.833273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.246 [2024-10-01 13:44:05.833329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.246 [2024-10-01 13:44:05.833351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.246 [2024-10-01 13:44:05.833365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.246 [2024-10-01 13:44:05.833379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.246 [2024-10-01 13:44:05.833409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.246 [2024-10-01 13:44:05.843834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.246 [2024-10-01 13:44:05.843892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.246 [2024-10-01 13:44:05.843993] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.246 [2024-10-01 13:44:05.844025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.246 [2024-10-01 13:44:05.844043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.246 [2024-10-01 13:44:05.844094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.246 [2024-10-01 13:44:05.844118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.246 [2024-10-01 13:44:05.844135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.246 [2024-10-01 13:44:05.844168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.246 [2024-10-01 13:44:05.844191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.246 [2024-10-01 13:44:05.844218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.246 [2024-10-01 13:44:05.844235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.246 [2024-10-01 13:44:05.844250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.246 [2024-10-01 13:44:05.844266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.246 [2024-10-01 13:44:05.844282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.246 [2024-10-01 13:44:05.844296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.246 [2024-10-01 13:44:05.844328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.246 [2024-10-01 13:44:05.844347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.246 [2024-10-01 13:44:05.854287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.246 [2024-10-01 13:44:05.854336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.246 [2024-10-01 13:44:05.854433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.246 [2024-10-01 13:44:05.854465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.246 [2024-10-01 13:44:05.854483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.246 [2024-10-01 13:44:05.854546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.246 [2024-10-01 13:44:05.854574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.246 [2024-10-01 13:44:05.854591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.246 [2024-10-01 13:44:05.854647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.246 [2024-10-01 13:44:05.854672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.246 [2024-10-01 13:44:05.854929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.246 [2024-10-01 13:44:05.854967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.246 [2024-10-01 13:44:05.854985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.246 [2024-10-01 13:44:05.855002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.246 [2024-10-01 13:44:05.855018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.246 [2024-10-01 13:44:05.855033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.246 [2024-10-01 13:44:05.855177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.246 [2024-10-01 13:44:05.855203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.247 [2024-10-01 13:44:05.865194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.247 [2024-10-01 13:44:05.865244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.247 [2024-10-01 13:44:05.865346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.247 [2024-10-01 13:44:05.865379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.247 [2024-10-01 13:44:05.865397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.247 [2024-10-01 13:44:05.865447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.247 [2024-10-01 13:44:05.865472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.247 [2024-10-01 13:44:05.865488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.247 [2024-10-01 13:44:05.865520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.247 [2024-10-01 13:44:05.865560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.247 [2024-10-01 13:44:05.866658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.247 [2024-10-01 13:44:05.866697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.247 [2024-10-01 13:44:05.866715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.247 [2024-10-01 13:44:05.866732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.247 [2024-10-01 13:44:05.866748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.247 [2024-10-01 13:44:05.866761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.247 [2024-10-01 13:44:05.866987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.247 [2024-10-01 13:44:05.867024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.247 [2024-10-01 13:44:05.876264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.247 [2024-10-01 13:44:05.876321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.247 [2024-10-01 13:44:05.876461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.247 [2024-10-01 13:44:05.876494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.247 [2024-10-01 13:44:05.876513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.247 [2024-10-01 13:44:05.876582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.247 [2024-10-01 13:44:05.876609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.247 [2024-10-01 13:44:05.876626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.247 [2024-10-01 13:44:05.876661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.247 [2024-10-01 13:44:05.876686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.247 [2024-10-01 13:44:05.876714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.247 [2024-10-01 13:44:05.876731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.247 [2024-10-01 13:44:05.876746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.247 [2024-10-01 13:44:05.876764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.247 [2024-10-01 13:44:05.876780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.247 [2024-10-01 13:44:05.876793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.247 [2024-10-01 13:44:05.876825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.247 [2024-10-01 13:44:05.876844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.247 [2024-10-01 13:44:05.887764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.247 [2024-10-01 13:44:05.887847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.247 [2024-10-01 13:44:05.888018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.247 [2024-10-01 13:44:05.888067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.247 [2024-10-01 13:44:05.888094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.247 [2024-10-01 13:44:05.888151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.247 [2024-10-01 13:44:05.888176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.247 [2024-10-01 13:44:05.888193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.247 [2024-10-01 13:44:05.888228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.247 [2024-10-01 13:44:05.888253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.247 [2024-10-01 13:44:05.888280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.247 [2024-10-01 13:44:05.888298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.247 [2024-10-01 13:44:05.888314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.247 [2024-10-01 13:44:05.888332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.247 [2024-10-01 13:44:05.888368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.247 [2024-10-01 13:44:05.888385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.247 [2024-10-01 13:44:05.888448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.247 [2024-10-01 13:44:05.888473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.247 [2024-10-01 13:44:05.897966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.247 [2024-10-01 13:44:05.898044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.247 [2024-10-01 13:44:05.898129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.247 [2024-10-01 13:44:05.898172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.247 [2024-10-01 13:44:05.898193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.247 [2024-10-01 13:44:05.898262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.247 [2024-10-01 13:44:05.898290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.247 [2024-10-01 13:44:05.898307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.247 [2024-10-01 13:44:05.898327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.247 [2024-10-01 13:44:05.898617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.247 [2024-10-01 13:44:05.898658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.247 [2024-10-01 13:44:05.898676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.247 [2024-10-01 13:44:05.898691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.247 [2024-10-01 13:44:05.898823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.247 [2024-10-01 13:44:05.898847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.247 [2024-10-01 13:44:05.898861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.247 [2024-10-01 13:44:05.898876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.247 [2024-10-01 13:44:05.898988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.247 [2024-10-01 13:44:05.908669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.247 [2024-10-01 13:44:05.908723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.247 [2024-10-01 13:44:05.908825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.247 [2024-10-01 13:44:05.908857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.247 [2024-10-01 13:44:05.908875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.247 [2024-10-01 13:44:05.908925] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.247 [2024-10-01 13:44:05.908950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.247 [2024-10-01 13:44:05.908966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.247 [2024-10-01 13:44:05.909000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.247 [2024-10-01 13:44:05.909047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.247 [2024-10-01 13:44:05.910143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.247 [2024-10-01 13:44:05.910184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.247 [2024-10-01 13:44:05.910203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.247 [2024-10-01 13:44:05.910221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.247 [2024-10-01 13:44:05.910237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.247 [2024-10-01 13:44:05.910250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.247 [2024-10-01 13:44:05.910491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.247 [2024-10-01 13:44:05.910530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.247 [2024-10-01 13:44:05.919524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.247 [2024-10-01 13:44:05.919587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.247 [2024-10-01 13:44:05.919686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.248 [2024-10-01 13:44:05.919718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.248 [2024-10-01 13:44:05.919737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.248 [2024-10-01 13:44:05.919786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.248 [2024-10-01 13:44:05.919811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.248 [2024-10-01 13:44:05.919828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.248 [2024-10-01 13:44:05.919861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.248 [2024-10-01 13:44:05.919897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.248 [2024-10-01 13:44:05.919927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.248 [2024-10-01 13:44:05.919945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.248 [2024-10-01 13:44:05.919959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.248 [2024-10-01 13:44:05.919976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.248 [2024-10-01 13:44:05.919991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.248 [2024-10-01 13:44:05.920005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.248 [2024-10-01 13:44:05.920037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.248 [2024-10-01 13:44:05.920055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.248 [2024-10-01 13:44:05.930444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.248 [2024-10-01 13:44:05.930501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.248 [2024-10-01 13:44:05.930622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.248 [2024-10-01 13:44:05.930654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.248 [2024-10-01 13:44:05.930693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.248 [2024-10-01 13:44:05.930749] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.248 [2024-10-01 13:44:05.930775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.248 [2024-10-01 13:44:05.930792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.248 [2024-10-01 13:44:05.930826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.248 [2024-10-01 13:44:05.930849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.248 [2024-10-01 13:44:05.930876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.248 [2024-10-01 13:44:05.930894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.248 [2024-10-01 13:44:05.930908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.248 [2024-10-01 13:44:05.930926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.248 [2024-10-01 13:44:05.930941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.248 [2024-10-01 13:44:05.930955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.248 [2024-10-01 13:44:05.930986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.248 [2024-10-01 13:44:05.931006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.248 [2024-10-01 13:44:05.940615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.248 [2024-10-01 13:44:05.940672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.248 [2024-10-01 13:44:05.940774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.248 [2024-10-01 13:44:05.940806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.248 [2024-10-01 13:44:05.940825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.248 [2024-10-01 13:44:05.940875] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.248 [2024-10-01 13:44:05.940899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.248 [2024-10-01 13:44:05.940916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.248 [2024-10-01 13:44:05.940949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.248 [2024-10-01 13:44:05.940972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.248 [2024-10-01 13:44:05.940999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.248 [2024-10-01 13:44:05.941017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.248 [2024-10-01 13:44:05.941032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.248 [2024-10-01 13:44:05.941049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.248 [2024-10-01 13:44:05.941064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.248 [2024-10-01 13:44:05.941094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.248 [2024-10-01 13:44:05.941360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.248 [2024-10-01 13:44:05.941388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.248 [2024-10-01 13:44:05.951259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.248 [2024-10-01 13:44:05.951312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.248 [2024-10-01 13:44:05.951412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.248 [2024-10-01 13:44:05.951444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.248 [2024-10-01 13:44:05.951462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.248 [2024-10-01 13:44:05.951512] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.248 [2024-10-01 13:44:05.951550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.248 [2024-10-01 13:44:05.951571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.248 [2024-10-01 13:44:05.951605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.248 [2024-10-01 13:44:05.951634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.248 [2024-10-01 13:44:05.952738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.248 [2024-10-01 13:44:05.952779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.248 [2024-10-01 13:44:05.952797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.248 [2024-10-01 13:44:05.952815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.248 [2024-10-01 13:44:05.952831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.248 [2024-10-01 13:44:05.952844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.248 [2024-10-01 13:44:05.953066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.248 [2024-10-01 13:44:05.953093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.248 [2024-10-01 13:44:05.962121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.248 [2024-10-01 13:44:05.962180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.248 [2024-10-01 13:44:05.962287] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.248 [2024-10-01 13:44:05.962320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.248 [2024-10-01 13:44:05.962338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.248 [2024-10-01 13:44:05.962387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.248 [2024-10-01 13:44:05.962412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.248 [2024-10-01 13:44:05.962429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.248 [2024-10-01 13:44:05.962463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.248 [2024-10-01 13:44:05.962486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.248 [2024-10-01 13:44:05.962555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.249 [2024-10-01 13:44:05.962578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.249 [2024-10-01 13:44:05.962593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.249 [2024-10-01 13:44:05.962610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.249 [2024-10-01 13:44:05.962625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.249 [2024-10-01 13:44:05.962639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.249 [2024-10-01 13:44:05.962672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.249 [2024-10-01 13:44:05.962692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.249 [2024-10-01 13:44:05.973123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.249 [2024-10-01 13:44:05.973183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.249 [2024-10-01 13:44:05.973290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.249 [2024-10-01 13:44:05.973323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.249 [2024-10-01 13:44:05.973341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.249 [2024-10-01 13:44:05.973391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.249 [2024-10-01 13:44:05.973416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.249 [2024-10-01 13:44:05.973432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.249 [2024-10-01 13:44:05.973467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.249 [2024-10-01 13:44:05.973490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.249 [2024-10-01 13:44:05.973517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.249 [2024-10-01 13:44:05.973549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.249 [2024-10-01 13:44:05.973568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.249 [2024-10-01 13:44:05.973586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.249 [2024-10-01 13:44:05.973601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.249 [2024-10-01 13:44:05.973614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.249 [2024-10-01 13:44:05.973647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.249 [2024-10-01 13:44:05.973667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.249 [2024-10-01 13:44:05.983261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.249 [2024-10-01 13:44:05.983337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.249 [2024-10-01 13:44:05.983419] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.249 [2024-10-01 13:44:05.983449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.249 [2024-10-01 13:44:05.983467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.249 [2024-10-01 13:44:05.983580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.249 [2024-10-01 13:44:05.983610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.249 [2024-10-01 13:44:05.983626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.249 [2024-10-01 13:44:05.983645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.249 [2024-10-01 13:44:05.983933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.249 [2024-10-01 13:44:05.983970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.249 [2024-10-01 13:44:05.983987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.249 [2024-10-01 13:44:05.984002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.249 [2024-10-01 13:44:05.984147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.249 [2024-10-01 13:44:05.984173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.249 [2024-10-01 13:44:05.984188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.249 [2024-10-01 13:44:05.984203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.249 [2024-10-01 13:44:05.984317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.249 [2024-10-01 13:44:05.993913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.249 [2024-10-01 13:44:05.993991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.249 [2024-10-01 13:44:05.994112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.249 [2024-10-01 13:44:05.994146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.249 [2024-10-01 13:44:05.994165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.249 [2024-10-01 13:44:05.994216] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.249 [2024-10-01 13:44:05.994241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.249 [2024-10-01 13:44:05.994258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.249 [2024-10-01 13:44:05.995416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.249 [2024-10-01 13:44:05.995466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.249 [2024-10-01 13:44:05.995709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.249 [2024-10-01 13:44:05.995748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.249 [2024-10-01 13:44:05.995768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.249 [2024-10-01 13:44:05.995787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.249 [2024-10-01 13:44:05.995803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.249 [2024-10-01 13:44:05.995817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.249 [2024-10-01 13:44:05.996903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.249 [2024-10-01 13:44:05.996965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.249 [2024-10-01 13:44:06.004771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.250 [2024-10-01 13:44:06.004822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.250 [2024-10-01 13:44:06.004923] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.250 [2024-10-01 13:44:06.004956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.250 [2024-10-01 13:44:06.004974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.250 [2024-10-01 13:44:06.005024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.250 [2024-10-01 13:44:06.005049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.250 [2024-10-01 13:44:06.005065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.250 [2024-10-01 13:44:06.005099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.250 [2024-10-01 13:44:06.005123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.250 [2024-10-01 13:44:06.005150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.250 [2024-10-01 13:44:06.005168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.250 [2024-10-01 13:44:06.005183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.250 [2024-10-01 13:44:06.005200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.250 [2024-10-01 13:44:06.005215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.250 [2024-10-01 13:44:06.005228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.250 [2024-10-01 13:44:06.005260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.250 [2024-10-01 13:44:06.005279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.250 [2024-10-01 13:44:06.015610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.250 [2024-10-01 13:44:06.015665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.250 [2024-10-01 13:44:06.015765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.250 [2024-10-01 13:44:06.015798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.250 [2024-10-01 13:44:06.015816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.250 [2024-10-01 13:44:06.015866] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.250 [2024-10-01 13:44:06.015903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.250 [2024-10-01 13:44:06.015921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.250 [2024-10-01 13:44:06.015955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.250 [2024-10-01 13:44:06.015978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.250 [2024-10-01 13:44:06.016005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.250 [2024-10-01 13:44:06.016042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.250 [2024-10-01 13:44:06.016058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.250 [2024-10-01 13:44:06.016075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.250 [2024-10-01 13:44:06.016091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.250 [2024-10-01 13:44:06.016105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.250 [2024-10-01 13:44:06.016138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.250 [2024-10-01 13:44:06.016158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.250 [2024-10-01 13:44:06.025742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.250 [2024-10-01 13:44:06.025817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.250 [2024-10-01 13:44:06.025901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.250 [2024-10-01 13:44:06.025933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.250 [2024-10-01 13:44:06.025951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.250 [2024-10-01 13:44:06.026018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.250 [2024-10-01 13:44:06.026046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.250 [2024-10-01 13:44:06.026062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.250 [2024-10-01 13:44:06.026081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.250 [2024-10-01 13:44:06.026346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.250 [2024-10-01 13:44:06.026387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.250 [2024-10-01 13:44:06.026405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.250 [2024-10-01 13:44:06.026419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.250 [2024-10-01 13:44:06.026582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.250 [2024-10-01 13:44:06.026610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.250 [2024-10-01 13:44:06.026625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.250 [2024-10-01 13:44:06.026640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.250 [2024-10-01 13:44:06.026749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.250 [2024-10-01 13:44:06.036296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.250 [2024-10-01 13:44:06.036344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.250 [2024-10-01 13:44:06.036444] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.250 [2024-10-01 13:44:06.036481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.250 [2024-10-01 13:44:06.036501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.250 [2024-10-01 13:44:06.036568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.250 [2024-10-01 13:44:06.036613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.251 [2024-10-01 13:44:06.036632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.251 [2024-10-01 13:44:06.037725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.251 [2024-10-01 13:44:06.037770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.251 [2024-10-01 13:44:06.038000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.251 [2024-10-01 13:44:06.038038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.251 [2024-10-01 13:44:06.038056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.251 [2024-10-01 13:44:06.038074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.251 [2024-10-01 13:44:06.038089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.251 [2024-10-01 13:44:06.038103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.251 [2024-10-01 13:44:06.039171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.251 [2024-10-01 13:44:06.039208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.251 [2024-10-01 13:44:06.047088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.251 [2024-10-01 13:44:06.047137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.251 [2024-10-01 13:44:06.047236] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.251 [2024-10-01 13:44:06.047270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.251 [2024-10-01 13:44:06.047288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.251 [2024-10-01 13:44:06.047338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.251 [2024-10-01 13:44:06.047363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.251 [2024-10-01 13:44:06.047379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.251 [2024-10-01 13:44:06.047413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.251 [2024-10-01 13:44:06.047436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.251 [2024-10-01 13:44:06.047463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.251 [2024-10-01 13:44:06.047481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.251 [2024-10-01 13:44:06.047495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.251 [2024-10-01 13:44:06.047511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.251 [2024-10-01 13:44:06.047526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.251 [2024-10-01 13:44:06.047559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.251 [2024-10-01 13:44:06.047595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.251 [2024-10-01 13:44:06.047615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.251 [2024-10-01 13:44:06.057985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.251 [2024-10-01 13:44:06.058036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.251 [2024-10-01 13:44:06.058133] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.251 [2024-10-01 13:44:06.058165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.251 [2024-10-01 13:44:06.058183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.251 [2024-10-01 13:44:06.058233] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.251 [2024-10-01 13:44:06.058258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.251 [2024-10-01 13:44:06.058274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.251 [2024-10-01 13:44:06.058307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.251 [2024-10-01 13:44:06.058330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.251 [2024-10-01 13:44:06.058357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.251 [2024-10-01 13:44:06.058376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.251 [2024-10-01 13:44:06.058390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.251 [2024-10-01 13:44:06.058407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.251 [2024-10-01 13:44:06.058423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.251 [2024-10-01 13:44:06.058436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.251 [2024-10-01 13:44:06.058468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.251 [2024-10-01 13:44:06.058487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.251 [2024-10-01 13:44:06.068116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.251 [2024-10-01 13:44:06.068191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.251 [2024-10-01 13:44:06.068273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.251 [2024-10-01 13:44:06.068303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.251 [2024-10-01 13:44:06.068321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.251 [2024-10-01 13:44:06.068388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.252 [2024-10-01 13:44:06.068416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.252 [2024-10-01 13:44:06.068432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.252 [2024-10-01 13:44:06.068451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.252 [2024-10-01 13:44:06.068732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.252 [2024-10-01 13:44:06.068773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.252 [2024-10-01 13:44:06.068791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.252 [2024-10-01 13:44:06.068825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.252 [2024-10-01 13:44:06.068973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.252 [2024-10-01 13:44:06.069000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.252 [2024-10-01 13:44:06.069015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.252 [2024-10-01 13:44:06.069030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.252 [2024-10-01 13:44:06.069140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.252 [2024-10-01 13:44:06.078827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.252 [2024-10-01 13:44:06.078899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.252 [2024-10-01 13:44:06.079021] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.252 [2024-10-01 13:44:06.079056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.252 [2024-10-01 13:44:06.079075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.252 [2024-10-01 13:44:06.079126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.252 [2024-10-01 13:44:06.079151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.252 [2024-10-01 13:44:06.079168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.252 [2024-10-01 13:44:06.080283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.252 [2024-10-01 13:44:06.080329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.252 [2024-10-01 13:44:06.080581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.252 [2024-10-01 13:44:06.080619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.252 [2024-10-01 13:44:06.080638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.252 [2024-10-01 13:44:06.080657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.252 [2024-10-01 13:44:06.080673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.252 [2024-10-01 13:44:06.080686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.252 [2024-10-01 13:44:06.081758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.252 [2024-10-01 13:44:06.081796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.252 [2024-10-01 13:44:06.089702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.252 [2024-10-01 13:44:06.089751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.252 [2024-10-01 13:44:06.089848] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.252 [2024-10-01 13:44:06.089880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.252 [2024-10-01 13:44:06.089898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.252 [2024-10-01 13:44:06.089948] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.252 [2024-10-01 13:44:06.089973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.252 [2024-10-01 13:44:06.090015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.252 [2024-10-01 13:44:06.090051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.252 [2024-10-01 13:44:06.090075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.252 [2024-10-01 13:44:06.090103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.252 [2024-10-01 13:44:06.090121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.252 [2024-10-01 13:44:06.090135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.252 [2024-10-01 13:44:06.090152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.252 [2024-10-01 13:44:06.090168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.252 [2024-10-01 13:44:06.090181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.252 [2024-10-01 13:44:06.090213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.252 [2024-10-01 13:44:06.090232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.252 [2024-10-01 13:44:06.100600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.252 [2024-10-01 13:44:06.100675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.252 [2024-10-01 13:44:06.100793] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.252 [2024-10-01 13:44:06.100826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.252 [2024-10-01 13:44:06.100845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.252 [2024-10-01 13:44:06.100895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.252 [2024-10-01 13:44:06.100920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.252 [2024-10-01 13:44:06.100937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.252 [2024-10-01 13:44:06.100972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.252 [2024-10-01 13:44:06.100996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.252 [2024-10-01 13:44:06.101023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.252 [2024-10-01 13:44:06.101040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.252 [2024-10-01 13:44:06.101056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.253 [2024-10-01 13:44:06.101073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.253 [2024-10-01 13:44:06.101088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.253 [2024-10-01 13:44:06.101102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.253 [2024-10-01 13:44:06.101134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.253 [2024-10-01 13:44:06.101154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.253 [2024-10-01 13:44:06.110760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.253 [2024-10-01 13:44:06.110867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.253 [2024-10-01 13:44:06.110954] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.253 [2024-10-01 13:44:06.110986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.253 [2024-10-01 13:44:06.111004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.253 [2024-10-01 13:44:06.111071] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.253 [2024-10-01 13:44:06.111099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.253 [2024-10-01 13:44:06.111116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.253 [2024-10-01 13:44:06.111135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.253 [2024-10-01 13:44:06.111399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.253 [2024-10-01 13:44:06.111439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.253 [2024-10-01 13:44:06.111456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.253 [2024-10-01 13:44:06.111471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.253 [2024-10-01 13:44:06.111634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.253 [2024-10-01 13:44:06.111662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.253 [2024-10-01 13:44:06.111677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.253 [2024-10-01 13:44:06.111691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.253 [2024-10-01 13:44:06.111801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.253 [2024-10-01 13:44:06.121318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.253 [2024-10-01 13:44:06.121373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.253 [2024-10-01 13:44:06.121474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.253 [2024-10-01 13:44:06.121505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.253 [2024-10-01 13:44:06.121523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.253 [2024-10-01 13:44:06.121591] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.253 [2024-10-01 13:44:06.121618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.253 [2024-10-01 13:44:06.121634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.253 [2024-10-01 13:44:06.121668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.253 [2024-10-01 13:44:06.121691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.253 [2024-10-01 13:44:06.122776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.253 [2024-10-01 13:44:06.122816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.253 [2024-10-01 13:44:06.122834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.253 [2024-10-01 13:44:06.122872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.253 [2024-10-01 13:44:06.122890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.253 [2024-10-01 13:44:06.122904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.253 [2024-10-01 13:44:06.123125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.253 [2024-10-01 13:44:06.123164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.253 [2024-10-01 13:44:06.132129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.253 [2024-10-01 13:44:06.132178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.253 [2024-10-01 13:44:06.132275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.253 [2024-10-01 13:44:06.132307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.253 [2024-10-01 13:44:06.132324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.253 [2024-10-01 13:44:06.132374] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.253 [2024-10-01 13:44:06.132399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.253 [2024-10-01 13:44:06.132416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.253 [2024-10-01 13:44:06.132449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.253 [2024-10-01 13:44:06.132472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.253 [2024-10-01 13:44:06.132499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.253 [2024-10-01 13:44:06.132517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.253 [2024-10-01 13:44:06.132532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.253 [2024-10-01 13:44:06.132568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.253 [2024-10-01 13:44:06.132584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.253 [2024-10-01 13:44:06.132598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.253 [2024-10-01 13:44:06.132631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.253 [2024-10-01 13:44:06.132650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.253 [2024-10-01 13:44:06.143037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.253 [2024-10-01 13:44:06.143092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.253 [2024-10-01 13:44:06.143205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.253 [2024-10-01 13:44:06.143240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.253 [2024-10-01 13:44:06.143259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.254 [2024-10-01 13:44:06.143310] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.254 [2024-10-01 13:44:06.143335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.254 [2024-10-01 13:44:06.143351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.254 [2024-10-01 13:44:06.143407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.254 [2024-10-01 13:44:06.143432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.254 [2024-10-01 13:44:06.143460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.254 [2024-10-01 13:44:06.143477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.254 [2024-10-01 13:44:06.143492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.254 [2024-10-01 13:44:06.143509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.254 [2024-10-01 13:44:06.143525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.254 [2024-10-01 13:44:06.143555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.254 [2024-10-01 13:44:06.143591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.254 [2024-10-01 13:44:06.143611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.254 [2024-10-01 13:44:06.153186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.254 [2024-10-01 13:44:06.153266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.254 [2024-10-01 13:44:06.153351] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.254 [2024-10-01 13:44:06.153392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.254 [2024-10-01 13:44:06.153413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.254 [2024-10-01 13:44:06.153735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.254 [2024-10-01 13:44:06.153777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.254 [2024-10-01 13:44:06.153797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.254 [2024-10-01 13:44:06.153817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.254 [2024-10-01 13:44:06.153963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.254 [2024-10-01 13:44:06.153993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.254 [2024-10-01 13:44:06.154009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.254 [2024-10-01 13:44:06.154023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.254 [2024-10-01 13:44:06.154139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.254 [2024-10-01 13:44:06.154169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.254 [2024-10-01 13:44:06.154185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.254 [2024-10-01 13:44:06.154200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.254 [2024-10-01 13:44:06.154242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.254 [2024-10-01 13:44:06.163610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.254 [2024-10-01 13:44:06.163662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.254 [2024-10-01 13:44:06.163789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.254 [2024-10-01 13:44:06.163833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.254 [2024-10-01 13:44:06.163854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.254 [2024-10-01 13:44:06.163923] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.254 [2024-10-01 13:44:06.163951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.254 [2024-10-01 13:44:06.163968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.254 [2024-10-01 13:44:06.165073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.254 [2024-10-01 13:44:06.165123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.254 [2024-10-01 13:44:06.165381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.254 [2024-10-01 13:44:06.165421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.254 [2024-10-01 13:44:06.165439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.254 [2024-10-01 13:44:06.165458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.254 [2024-10-01 13:44:06.165474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.254 [2024-10-01 13:44:06.165488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.254 [2024-10-01 13:44:06.166586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.254 [2024-10-01 13:44:06.166625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.254 [2024-10-01 13:44:06.174595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.254 [2024-10-01 13:44:06.174658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.254 [2024-10-01 13:44:06.174785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.254 [2024-10-01 13:44:06.174819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.254 [2024-10-01 13:44:06.174838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.254 [2024-10-01 13:44:06.174889] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.254 [2024-10-01 13:44:06.174914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.254 [2024-10-01 13:44:06.174931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.254 [2024-10-01 13:44:06.174966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.254 [2024-10-01 13:44:06.174989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.254 [2024-10-01 13:44:06.175016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.254 [2024-10-01 13:44:06.175034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.254 [2024-10-01 13:44:06.175050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.254 [2024-10-01 13:44:06.175068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.254 [2024-10-01 13:44:06.175107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.254 [2024-10-01 13:44:06.175125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.254 [2024-10-01 13:44:06.175174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.255 [2024-10-01 13:44:06.175197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.255 [2024-10-01 13:44:06.185632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.255 [2024-10-01 13:44:06.185692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.255 [2024-10-01 13:44:06.185796] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.255 [2024-10-01 13:44:06.185836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.255 [2024-10-01 13:44:06.185856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.255 [2024-10-01 13:44:06.185909] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.255 [2024-10-01 13:44:06.185934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.255 [2024-10-01 13:44:06.185951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.255 [2024-10-01 13:44:06.185986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.255 [2024-10-01 13:44:06.186010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.255 [2024-10-01 13:44:06.186037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.255 [2024-10-01 13:44:06.186054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.255 [2024-10-01 13:44:06.186069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.255 [2024-10-01 13:44:06.186086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.255 [2024-10-01 13:44:06.186101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.255 [2024-10-01 13:44:06.186115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.255 [2024-10-01 13:44:06.186147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.255 [2024-10-01 13:44:06.186166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.255 [2024-10-01 13:44:06.195775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.255 [2024-10-01 13:44:06.195855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.255 [2024-10-01 13:44:06.195956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.255 [2024-10-01 13:44:06.195990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.255 [2024-10-01 13:44:06.196009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.255 [2024-10-01 13:44:06.196326] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.255 [2024-10-01 13:44:06.196370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.255 [2024-10-01 13:44:06.196390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.255 [2024-10-01 13:44:06.196411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.255 [2024-10-01 13:44:06.196604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.255 [2024-10-01 13:44:06.196639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.255 [2024-10-01 13:44:06.196655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.255 [2024-10-01 13:44:06.196670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.255 [2024-10-01 13:44:06.196783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.255 [2024-10-01 13:44:06.196806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.255 [2024-10-01 13:44:06.196820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.255 [2024-10-01 13:44:06.196834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.255 [2024-10-01 13:44:06.196873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.255 [2024-10-01 13:44:06.206194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.255 [2024-10-01 13:44:06.206263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.255 [2024-10-01 13:44:06.206378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.255 [2024-10-01 13:44:06.206412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.255 [2024-10-01 13:44:06.206431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.255 [2024-10-01 13:44:06.206482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.255 [2024-10-01 13:44:06.206507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.255 [2024-10-01 13:44:06.206524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.255 [2024-10-01 13:44:06.207662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.255 [2024-10-01 13:44:06.207709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.255 [2024-10-01 13:44:06.207971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.255 [2024-10-01 13:44:06.208010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.255 [2024-10-01 13:44:06.208030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.255 [2024-10-01 13:44:06.208049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.255 [2024-10-01 13:44:06.208064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.255 [2024-10-01 13:44:06.208079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.255 [2024-10-01 13:44:06.209185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.255 [2024-10-01 13:44:06.209227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.255 [2024-10-01 13:44:06.217181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.255 [2024-10-01 13:44:06.217235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.255 [2024-10-01 13:44:06.217351] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.255 [2024-10-01 13:44:06.217396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.255 [2024-10-01 13:44:06.217448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.255 [2024-10-01 13:44:06.217505] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.255 [2024-10-01 13:44:06.217531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.255 [2024-10-01 13:44:06.217569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.255 [2024-10-01 13:44:06.217606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.255 [2024-10-01 13:44:06.217630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.255 [2024-10-01 13:44:06.217657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.255 [2024-10-01 13:44:06.217675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.255 [2024-10-01 13:44:06.217690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.255 [2024-10-01 13:44:06.217707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.255 [2024-10-01 13:44:06.217723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.255 [2024-10-01 13:44:06.217737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.255 [2024-10-01 13:44:06.217769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.255 [2024-10-01 13:44:06.217789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.255 [2024-10-01 13:44:06.228254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.255 [2024-10-01 13:44:06.228331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.255 [2024-10-01 13:44:06.228453] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.255 [2024-10-01 13:44:06.228488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.255 [2024-10-01 13:44:06.228507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.255 [2024-10-01 13:44:06.228575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.256 [2024-10-01 13:44:06.228603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.256 [2024-10-01 13:44:06.228620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.256 [2024-10-01 13:44:06.228656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.256 [2024-10-01 13:44:06.228680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.256 [2024-10-01 13:44:06.228708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.256 [2024-10-01 13:44:06.228726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.256 [2024-10-01 13:44:06.228741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.256 [2024-10-01 13:44:06.228759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.256 [2024-10-01 13:44:06.228774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.256 [2024-10-01 13:44:06.228813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.256 [2024-10-01 13:44:06.229592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.256 [2024-10-01 13:44:06.229633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.256 [2024-10-01 13:44:06.238418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.256 [2024-10-01 13:44:06.238495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.256 [2024-10-01 13:44:06.238594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.256 [2024-10-01 13:44:06.238626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.256 [2024-10-01 13:44:06.238644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.256 [2024-10-01 13:44:06.238949] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.256 [2024-10-01 13:44:06.238991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.256 [2024-10-01 13:44:06.239011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.256 [2024-10-01 13:44:06.239031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.256 [2024-10-01 13:44:06.239186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.256 [2024-10-01 13:44:06.239228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.256 [2024-10-01 13:44:06.239246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.256 [2024-10-01 13:44:06.239260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.256 [2024-10-01 13:44:06.239373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.256 [2024-10-01 13:44:06.239396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.256 [2024-10-01 13:44:06.239410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.256 [2024-10-01 13:44:06.239425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.256 [2024-10-01 13:44:06.239464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.256 [2024-10-01 13:44:06.248738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.256 [2024-10-01 13:44:06.248790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.256 [2024-10-01 13:44:06.248890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.256 [2024-10-01 13:44:06.248922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.256 [2024-10-01 13:44:06.248940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.256 [2024-10-01 13:44:06.248991] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.256 [2024-10-01 13:44:06.249016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.256 [2024-10-01 13:44:06.249032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.256 [2024-10-01 13:44:06.250140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.256 [2024-10-01 13:44:06.250189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.256 [2024-10-01 13:44:06.250444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.256 [2024-10-01 13:44:06.250484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.256 [2024-10-01 13:44:06.250502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.256 [2024-10-01 13:44:06.250521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.256 [2024-10-01 13:44:06.250552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.256 [2024-10-01 13:44:06.250568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.256 [2024-10-01 13:44:06.251648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.256 [2024-10-01 13:44:06.251687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.256 [2024-10-01 13:44:06.259560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.256 [2024-10-01 13:44:06.259611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.256 [2024-10-01 13:44:06.259711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.256 [2024-10-01 13:44:06.259744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.256 [2024-10-01 13:44:06.259762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.256 [2024-10-01 13:44:06.259813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.256 [2024-10-01 13:44:06.259838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.256 [2024-10-01 13:44:06.259855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.256 [2024-10-01 13:44:06.259901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.256 [2024-10-01 13:44:06.259928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.256 [2024-10-01 13:44:06.259955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.256 [2024-10-01 13:44:06.259973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.256 [2024-10-01 13:44:06.259987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.256 [2024-10-01 13:44:06.260004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.256 [2024-10-01 13:44:06.260020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.257 [2024-10-01 13:44:06.260033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.257 [2024-10-01 13:44:06.260065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.257 [2024-10-01 13:44:06.260085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.257 [2024-10-01 13:44:06.270416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.257 [2024-10-01 13:44:06.270470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.257 [2024-10-01 13:44:06.270584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.257 [2024-10-01 13:44:06.270618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.257 [2024-10-01 13:44:06.270637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.257 [2024-10-01 13:44:06.270713] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.257 [2024-10-01 13:44:06.270739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.257 [2024-10-01 13:44:06.270756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.257 [2024-10-01 13:44:06.270791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.257 [2024-10-01 13:44:06.270815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.257 [2024-10-01 13:44:06.270842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.257 [2024-10-01 13:44:06.270860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.257 [2024-10-01 13:44:06.270874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.257 [2024-10-01 13:44:06.270892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.257 [2024-10-01 13:44:06.270908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.257 [2024-10-01 13:44:06.270921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.257 [2024-10-01 13:44:06.270954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.257 [2024-10-01 13:44:06.270974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.257 [2024-10-01 13:44:06.280568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.257 [2024-10-01 13:44:06.280650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.257 [2024-10-01 13:44:06.280747] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.257 [2024-10-01 13:44:06.280784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.257 [2024-10-01 13:44:06.280804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.257 [2024-10-01 13:44:06.280873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.257 [2024-10-01 13:44:06.280902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.257 [2024-10-01 13:44:06.280918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.257 [2024-10-01 13:44:06.280938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.257 [2024-10-01 13:44:06.281226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.257 [2024-10-01 13:44:06.281269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.257 [2024-10-01 13:44:06.281287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.257 [2024-10-01 13:44:06.281303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.257 [2024-10-01 13:44:06.281436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.257 [2024-10-01 13:44:06.281461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.257 [2024-10-01 13:44:06.281476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.257 [2024-10-01 13:44:06.281506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.257 [2024-10-01 13:44:06.281637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.257 [2024-10-01 13:44:06.291321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.257 [2024-10-01 13:44:06.291377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.257 [2024-10-01 13:44:06.291481] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.257 [2024-10-01 13:44:06.291513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.257 [2024-10-01 13:44:06.291532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.257 [2024-10-01 13:44:06.291604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.257 [2024-10-01 13:44:06.291630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.257 [2024-10-01 13:44:06.291647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.257 [2024-10-01 13:44:06.291682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.257 [2024-10-01 13:44:06.291706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.257 [2024-10-01 13:44:06.292819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.257 [2024-10-01 13:44:06.292862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.258 [2024-10-01 13:44:06.292881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.258 [2024-10-01 13:44:06.292901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.258 [2024-10-01 13:44:06.292916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.258 [2024-10-01 13:44:06.292929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.258 [2024-10-01 13:44:06.293176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.258 [2024-10-01 13:44:06.293217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.258 [2024-10-01 13:44:06.302283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.258 [2024-10-01 13:44:06.302340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.258 [2024-10-01 13:44:06.302448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.258 [2024-10-01 13:44:06.302481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.258 [2024-10-01 13:44:06.302500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.258 [2024-10-01 13:44:06.302566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.258 [2024-10-01 13:44:06.302594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.258 [2024-10-01 13:44:06.302611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.258 [2024-10-01 13:44:06.302647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.258 [2024-10-01 13:44:06.302671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.258 [2024-10-01 13:44:06.302698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.258 [2024-10-01 13:44:06.302745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.258 [2024-10-01 13:44:06.302762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.258 [2024-10-01 13:44:06.302780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.258 [2024-10-01 13:44:06.302795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.258 [2024-10-01 13:44:06.302808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.258 [2024-10-01 13:44:06.302841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.258 [2024-10-01 13:44:06.302862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.258 [2024-10-01 13:44:06.313431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.258 [2024-10-01 13:44:06.313487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.258 [2024-10-01 13:44:06.313617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.258 [2024-10-01 13:44:06.313665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.258 [2024-10-01 13:44:06.313685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.258 [2024-10-01 13:44:06.313738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.258 [2024-10-01 13:44:06.313763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.258 [2024-10-01 13:44:06.313779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.258 [2024-10-01 13:44:06.313814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.258 [2024-10-01 13:44:06.313837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.258 [2024-10-01 13:44:06.313864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.258 [2024-10-01 13:44:06.313882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.258 [2024-10-01 13:44:06.313897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.258 [2024-10-01 13:44:06.313915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.258 [2024-10-01 13:44:06.313930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.258 [2024-10-01 13:44:06.313944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.258 [2024-10-01 13:44:06.313975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.258 [2024-10-01 13:44:06.313995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.258 [2024-10-01 13:44:06.324007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.258 [2024-10-01 13:44:06.324063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.258 [2024-10-01 13:44:06.324189] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.258 [2024-10-01 13:44:06.324235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.258 [2024-10-01 13:44:06.324256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.258 [2024-10-01 13:44:06.324309] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.258 [2024-10-01 13:44:06.324346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.258 [2024-10-01 13:44:06.324376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.258 [2024-10-01 13:44:06.324413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.258 [2024-10-01 13:44:06.324437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.258 [2024-10-01 13:44:06.324464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.258 [2024-10-01 13:44:06.324482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.258 [2024-10-01 13:44:06.324496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.258 [2024-10-01 13:44:06.324514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.258 [2024-10-01 13:44:06.324528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.258 [2024-10-01 13:44:06.324559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.258 [2024-10-01 13:44:06.324827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.258 [2024-10-01 13:44:06.324854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.258 [2024-10-01 13:44:06.335517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.258 [2024-10-01 13:44:06.335590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.259 [2024-10-01 13:44:06.335843] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.259 [2024-10-01 13:44:06.335902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.259 [2024-10-01 13:44:06.335925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.259 [2024-10-01 13:44:06.335980] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.259 [2024-10-01 13:44:06.336006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.259 [2024-10-01 13:44:06.336033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.259 [2024-10-01 13:44:06.337158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.259 [2024-10-01 13:44:06.337206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.259 [2024-10-01 13:44:06.337435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.259 [2024-10-01 13:44:06.337473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.259 [2024-10-01 13:44:06.337492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.259 [2024-10-01 13:44:06.337512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.259 [2024-10-01 13:44:06.337528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.259 [2024-10-01 13:44:06.337557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.259 [2024-10-01 13:44:06.337602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.259 [2024-10-01 13:44:06.337625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.259 [2024-10-01 13:44:06.345683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.259 [2024-10-01 13:44:06.345801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.259 [2024-10-01 13:44:06.345914] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.259 [2024-10-01 13:44:06.345949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.259 [2024-10-01 13:44:06.345975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.259 [2024-10-01 13:44:06.346046] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.259 [2024-10-01 13:44:06.346074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.259 [2024-10-01 13:44:06.346090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.259 [2024-10-01 13:44:06.346111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.259 [2024-10-01 13:44:06.346159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.259 [2024-10-01 13:44:06.346189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.259 [2024-10-01 13:44:06.346205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.259 [2024-10-01 13:44:06.346221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.259 [2024-10-01 13:44:06.346255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.259 [2024-10-01 13:44:06.346276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.259 [2024-10-01 13:44:06.346290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.259 [2024-10-01 13:44:06.346304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.259 [2024-10-01 13:44:06.347261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.259 [2024-10-01 13:44:06.355816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.259 [2024-10-01 13:44:06.355951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.259 [2024-10-01 13:44:06.355996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.259 [2024-10-01 13:44:06.356017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.259 [2024-10-01 13:44:06.356322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.259 [2024-10-01 13:44:06.356484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.259 [2024-10-01 13:44:06.356531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.259 [2024-10-01 13:44:06.356568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.259 [2024-10-01 13:44:06.356583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.259 [2024-10-01 13:44:06.356700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.259 [2024-10-01 13:44:06.356775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.259 [2024-10-01 13:44:06.356805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.259 [2024-10-01 13:44:06.356847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.259 [2024-10-01 13:44:06.356892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.259 [2024-10-01 13:44:06.356927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.259 [2024-10-01 13:44:06.356945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.259 [2024-10-01 13:44:06.356959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.259 [2024-10-01 13:44:06.356991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.259 [2024-10-01 13:44:06.367071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.259 [2024-10-01 13:44:06.367125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.259 [2024-10-01 13:44:06.368016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.259 [2024-10-01 13:44:06.368064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.259 [2024-10-01 13:44:06.368085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.259 [2024-10-01 13:44:06.368148] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.259 [2024-10-01 13:44:06.368183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.259 [2024-10-01 13:44:06.368201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.260 [2024-10-01 13:44:06.368386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.260 [2024-10-01 13:44:06.368428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.260 [2024-10-01 13:44:06.368505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.260 [2024-10-01 13:44:06.368526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.260 [2024-10-01 13:44:06.368557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.260 [2024-10-01 13:44:06.368577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.260 [2024-10-01 13:44:06.368593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.260 [2024-10-01 13:44:06.368606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.260 [2024-10-01 13:44:06.368641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.260 [2024-10-01 13:44:06.368661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.260 [2024-10-01 13:44:06.377222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.260 [2024-10-01 13:44:06.377301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.260 [2024-10-01 13:44:06.377387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.260 [2024-10-01 13:44:06.377417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.260 [2024-10-01 13:44:06.377441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.260 [2024-10-01 13:44:06.378243] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.260 [2024-10-01 13:44:06.378290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.260 [2024-10-01 13:44:06.378334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.260 [2024-10-01 13:44:06.378357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.260 [2024-10-01 13:44:06.378565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.260 [2024-10-01 13:44:06.378604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.260 [2024-10-01 13:44:06.378622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.260 [2024-10-01 13:44:06.378636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.260 [2024-10-01 13:44:06.378681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.260 [2024-10-01 13:44:06.378703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.260 [2024-10-01 13:44:06.378717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.260 [2024-10-01 13:44:06.378731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.260 [2024-10-01 13:44:06.379666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.260 [2024-10-01 13:44:06.387695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.260 [2024-10-01 13:44:06.387759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.260 [2024-10-01 13:44:06.387872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.260 [2024-10-01 13:44:06.387956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.260 [2024-10-01 13:44:06.387976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.260 [2024-10-01 13:44:06.388032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.260 [2024-10-01 13:44:06.388057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.260 [2024-10-01 13:44:06.388074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.260 [2024-10-01 13:44:06.388108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.260 [2024-10-01 13:44:06.388139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.260 [2024-10-01 13:44:06.388180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.260 [2024-10-01 13:44:06.388200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.260 [2024-10-01 13:44:06.388214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.260 [2024-10-01 13:44:06.388232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.260 [2024-10-01 13:44:06.388247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.260 [2024-10-01 13:44:06.388261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.260 [2024-10-01 13:44:06.388293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.260 [2024-10-01 13:44:06.388313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.260 [2024-10-01 13:44:06.397846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.260 [2024-10-01 13:44:06.399016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.260 [2024-10-01 13:44:06.399131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.260 [2024-10-01 13:44:06.399184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.260 [2024-10-01 13:44:06.399207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.260 [2024-10-01 13:44:06.399470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.260 [2024-10-01 13:44:06.399513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.260 [2024-10-01 13:44:06.399547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.260 [2024-10-01 13:44:06.399571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.260 [2024-10-01 13:44:06.400699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.260 [2024-10-01 13:44:06.400744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.260 [2024-10-01 13:44:06.400763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.260 [2024-10-01 13:44:06.400778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.260 [2024-10-01 13:44:06.401411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.260 [2024-10-01 13:44:06.401452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.260 [2024-10-01 13:44:06.401471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.261 [2024-10-01 13:44:06.401486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.261 [2024-10-01 13:44:06.401831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.261 [2024-10-01 13:44:06.408481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.261 [2024-10-01 13:44:06.408621] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.261 [2024-10-01 13:44:06.408665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.261 [2024-10-01 13:44:06.408685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.261 [2024-10-01 13:44:06.408720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.261 [2024-10-01 13:44:06.408753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.261 [2024-10-01 13:44:06.408770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.261 [2024-10-01 13:44:06.408785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.261 [2024-10-01 13:44:06.408817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.261 [2024-10-01 13:44:06.409105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.261 [2024-10-01 13:44:06.409217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.261 [2024-10-01 13:44:06.409255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.261 [2024-10-01 13:44:06.409275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.261 [2024-10-01 13:44:06.409325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.261 [2024-10-01 13:44:06.409358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.261 [2024-10-01 13:44:06.409376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.261 [2024-10-01 13:44:06.409390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.261 [2024-10-01 13:44:06.409422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.261 [2024-10-01 13:44:06.419608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.261 [2024-10-01 13:44:06.419700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.261 [2024-10-01 13:44:06.419834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.261 [2024-10-01 13:44:06.419870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.261 [2024-10-01 13:44:06.419906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.261 [2024-10-01 13:44:06.419963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.261 [2024-10-01 13:44:06.419989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.261 [2024-10-01 13:44:06.420006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.261 [2024-10-01 13:44:06.420042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.261 [2024-10-01 13:44:06.420066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.261 [2024-10-01 13:44:06.420093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.261 [2024-10-01 13:44:06.420112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.261 [2024-10-01 13:44:06.420133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.261 [2024-10-01 13:44:06.420163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.261 [2024-10-01 13:44:06.420181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.261 [2024-10-01 13:44:06.420195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.261 [2024-10-01 13:44:06.420231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.261 [2024-10-01 13:44:06.420251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.261 [2024-10-01 13:44:06.430063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.261 [2024-10-01 13:44:06.430161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.261 [2024-10-01 13:44:06.430298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.261 [2024-10-01 13:44:06.430334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.261 [2024-10-01 13:44:06.430353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.261 [2024-10-01 13:44:06.430404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.261 [2024-10-01 13:44:06.430429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.261 [2024-10-01 13:44:06.430446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.261 [2024-10-01 13:44:06.430768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.261 [2024-10-01 13:44:06.430813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.261 [2024-10-01 13:44:06.430980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.261 [2024-10-01 13:44:06.431017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.261 [2024-10-01 13:44:06.431037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.261 [2024-10-01 13:44:06.431055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.261 [2024-10-01 13:44:06.431070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.261 [2024-10-01 13:44:06.431084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.261 [2024-10-01 13:44:06.431207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.261 [2024-10-01 13:44:06.431235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.261 [2024-10-01 13:44:06.440751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.261 [2024-10-01 13:44:06.440804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.261 [2024-10-01 13:44:06.440904] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.261 [2024-10-01 13:44:06.440937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.261 [2024-10-01 13:44:06.440955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.261 [2024-10-01 13:44:06.441005] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.261 [2024-10-01 13:44:06.441030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.261 [2024-10-01 13:44:06.441046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.262 [2024-10-01 13:44:06.441080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.262 [2024-10-01 13:44:06.441104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.262 [2024-10-01 13:44:06.442209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.262 [2024-10-01 13:44:06.442253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.262 [2024-10-01 13:44:06.442272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.262 [2024-10-01 13:44:06.442290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.262 [2024-10-01 13:44:06.442306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.262 [2024-10-01 13:44:06.442319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.262 [2024-10-01 13:44:06.442563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.262 [2024-10-01 13:44:06.442594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.262 [2024-10-01 13:44:06.451581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.262 [2024-10-01 13:44:06.451632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.262 [2024-10-01 13:44:06.451754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.262 [2024-10-01 13:44:06.451798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.262 [2024-10-01 13:44:06.451819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.262 [2024-10-01 13:44:06.451871] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.262 [2024-10-01 13:44:06.451912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.262 [2024-10-01 13:44:06.451930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.262 [2024-10-01 13:44:06.451965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.262 [2024-10-01 13:44:06.451989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.262 [2024-10-01 13:44:06.452016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.262 [2024-10-01 13:44:06.452034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.262 [2024-10-01 13:44:06.452048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.262 [2024-10-01 13:44:06.452065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.262 [2024-10-01 13:44:06.452080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.262 [2024-10-01 13:44:06.452094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.262 [2024-10-01 13:44:06.452128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.262 [2024-10-01 13:44:06.452159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.262 [2024-10-01 13:44:06.462631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.262 [2024-10-01 13:44:06.462686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.262 [2024-10-01 13:44:06.462789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.262 [2024-10-01 13:44:06.462822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.262 [2024-10-01 13:44:06.462840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.262 [2024-10-01 13:44:06.462891] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.262 [2024-10-01 13:44:06.462916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.262 [2024-10-01 13:44:06.462932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.262 [2024-10-01 13:44:06.462966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.262 [2024-10-01 13:44:06.462990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.262 [2024-10-01 13:44:06.463016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.262 [2024-10-01 13:44:06.463034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.262 [2024-10-01 13:44:06.463049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.262 [2024-10-01 13:44:06.463066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.262 [2024-10-01 13:44:06.463101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.262 [2024-10-01 13:44:06.463117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.262 [2024-10-01 13:44:06.463166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.262 [2024-10-01 13:44:06.463190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.262 [2024-10-01 13:44:06.472773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.262 [2024-10-01 13:44:06.472828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.262 [2024-10-01 13:44:06.472928] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.262 [2024-10-01 13:44:06.472961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.262 [2024-10-01 13:44:06.472979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.262 [2024-10-01 13:44:06.473030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.262 [2024-10-01 13:44:06.473055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.262 [2024-10-01 13:44:06.473071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.262 [2024-10-01 13:44:06.473105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.262 [2024-10-01 13:44:06.473133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.262 [2024-10-01 13:44:06.473178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.262 [2024-10-01 13:44:06.473199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.262 [2024-10-01 13:44:06.473213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.262 [2024-10-01 13:44:06.473231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.262 [2024-10-01 13:44:06.473246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.262 [2024-10-01 13:44:06.473259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.262 [2024-10-01 13:44:06.473305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.262 [2024-10-01 13:44:06.473328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.262 [2024-10-01 13:44:06.483734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.262 [2024-10-01 13:44:06.483787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.262 [2024-10-01 13:44:06.483903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.262 [2024-10-01 13:44:06.483939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.262 [2024-10-01 13:44:06.483957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.262 [2024-10-01 13:44:06.484009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.262 [2024-10-01 13:44:06.484034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.262 [2024-10-01 13:44:06.484050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.262 [2024-10-01 13:44:06.485163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.262 [2024-10-01 13:44:06.485233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.262 [2024-10-01 13:44:06.485449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.262 [2024-10-01 13:44:06.485487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.262 [2024-10-01 13:44:06.485505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.262 [2024-10-01 13:44:06.485524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.262 [2024-10-01 13:44:06.485554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.262 [2024-10-01 13:44:06.485569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.262 [2024-10-01 13:44:06.486663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.263 [2024-10-01 13:44:06.486702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.263 [2024-10-01 13:44:06.493868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.263 [2024-10-01 13:44:06.494867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.263 [2024-10-01 13:44:06.494974] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.263 [2024-10-01 13:44:06.495012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.263 [2024-10-01 13:44:06.495032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.263 [2024-10-01 13:44:06.495278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.263 [2024-10-01 13:44:06.495322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.263 [2024-10-01 13:44:06.495342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.263 [2024-10-01 13:44:06.495361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.263 [2024-10-01 13:44:06.495414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.263 [2024-10-01 13:44:06.495438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.263 [2024-10-01 13:44:06.495453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.263 [2024-10-01 13:44:06.495468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.263 [2024-10-01 13:44:06.495503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.263 [2024-10-01 13:44:06.495524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.263 [2024-10-01 13:44:06.495557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.263 [2024-10-01 13:44:06.495574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.263 [2024-10-01 13:44:06.495606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.263 [2024-10-01 13:44:06.503967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.263 [2024-10-01 13:44:06.505400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.263 [2024-10-01 13:44:06.505448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.263 [2024-10-01 13:44:06.505488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.263 [2024-10-01 13:44:06.506456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.263 [2024-10-01 13:44:06.506616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.263 [2024-10-01 13:44:06.506667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.263 [2024-10-01 13:44:06.506687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.263 [2024-10-01 13:44:06.506702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.263 [2024-10-01 13:44:06.506737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.263 [2024-10-01 13:44:06.506804] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.263 [2024-10-01 13:44:06.506833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.263 [2024-10-01 13:44:06.506851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.263 [2024-10-01 13:44:06.506885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.263 [2024-10-01 13:44:06.506917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.263 [2024-10-01 13:44:06.506934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.263 [2024-10-01 13:44:06.506948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.263 [2024-10-01 13:44:06.506979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.263 [2024-10-01 13:44:06.514879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.263 [2024-10-01 13:44:06.515008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.263 [2024-10-01 13:44:06.515051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.263 [2024-10-01 13:44:06.515072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.263 [2024-10-01 13:44:06.516206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.263 [2024-10-01 13:44:06.516936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.263 [2024-10-01 13:44:06.516978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.263 [2024-10-01 13:44:06.516997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.263 [2024-10-01 13:44:06.517105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.263 [2024-10-01 13:44:06.517160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.263 [2024-10-01 13:44:06.517525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.263 [2024-10-01 13:44:06.517583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.263 [2024-10-01 13:44:06.517604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.263 [2024-10-01 13:44:06.517751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.263 [2024-10-01 13:44:06.517898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.263 [2024-10-01 13:44:06.517951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.263 [2024-10-01 13:44:06.517969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.263 [2024-10-01 13:44:06.518013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.263 [2024-10-01 13:44:06.524984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.263 [2024-10-01 13:44:06.525106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.263 [2024-10-01 13:44:06.525179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.263 [2024-10-01 13:44:06.525204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.263 [2024-10-01 13:44:06.525239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.263 [2024-10-01 13:44:06.525272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.263 [2024-10-01 13:44:06.525290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.263 [2024-10-01 13:44:06.525305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.263 [2024-10-01 13:44:06.525338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.263 [2024-10-01 13:44:06.527729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.263 [2024-10-01 13:44:06.527849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.263 [2024-10-01 13:44:06.527903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.263 [2024-10-01 13:44:06.527926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.263 [2024-10-01 13:44:06.527962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.263 [2024-10-01 13:44:06.527994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.263 [2024-10-01 13:44:06.528012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.263 [2024-10-01 13:44:06.528033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.263 [2024-10-01 13:44:06.528064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.263 [2024-10-01 13:44:06.535583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.263 [2024-10-01 13:44:06.535707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.264 [2024-10-01 13:44:06.535743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.264 [2024-10-01 13:44:06.535762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.264 [2024-10-01 13:44:06.535796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.264 [2024-10-01 13:44:06.535828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.264 [2024-10-01 13:44:06.535846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.264 [2024-10-01 13:44:06.535861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.264 [2024-10-01 13:44:06.535907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.264 [2024-10-01 13:44:06.538727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.264 [2024-10-01 13:44:06.539030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.264 [2024-10-01 13:44:06.539076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.264 [2024-10-01 13:44:06.539097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.264 [2024-10-01 13:44:06.539148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.264 [2024-10-01 13:44:06.539194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.264 [2024-10-01 13:44:06.539214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.264 [2024-10-01 13:44:06.539229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.264 [2024-10-01 13:44:06.539261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.264 [2024-10-01 13:44:06.545844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.264 [2024-10-01 13:44:06.545969] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.264 [2024-10-01 13:44:06.546004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.264 [2024-10-01 13:44:06.546024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.264 [2024-10-01 13:44:06.546058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.264 [2024-10-01 13:44:06.546091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.264 [2024-10-01 13:44:06.546109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.264 [2024-10-01 13:44:06.546126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.264 [2024-10-01 13:44:06.546173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.264 [2024-10-01 13:44:06.549986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.264 [2024-10-01 13:44:06.550107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.264 [2024-10-01 13:44:06.550151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.264 [2024-10-01 13:44:06.550179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.264 [2024-10-01 13:44:06.550215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.264 [2024-10-01 13:44:06.550248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.264 [2024-10-01 13:44:06.550266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.264 [2024-10-01 13:44:06.550280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.264 [2024-10-01 13:44:06.550312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.264 [2024-10-01 13:44:06.556484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.264 [2024-10-01 13:44:06.556622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.264 [2024-10-01 13:44:06.556665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.264 [2024-10-01 13:44:06.556685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.264 [2024-10-01 13:44:06.556739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.264 [2024-10-01 13:44:06.556773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.264 [2024-10-01 13:44:06.556791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.264 [2024-10-01 13:44:06.556805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.264 [2024-10-01 13:44:06.556838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.264 [2024-10-01 13:44:06.560082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.264 [2024-10-01 13:44:06.560214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.264 [2024-10-01 13:44:06.560258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.264 [2024-10-01 13:44:06.560279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.264 [2024-10-01 13:44:06.560314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.264 [2024-10-01 13:44:06.560347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.264 [2024-10-01 13:44:06.560366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.264 [2024-10-01 13:44:06.560381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.264 [2024-10-01 13:44:06.560660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.264 [2024-10-01 13:44:06.567363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.264 [2024-10-01 13:44:06.567485] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.264 [2024-10-01 13:44:06.567521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.264 [2024-10-01 13:44:06.567557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.264 [2024-10-01 13:44:06.567595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.264 [2024-10-01 13:44:06.567628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.264 [2024-10-01 13:44:06.567646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.264 [2024-10-01 13:44:06.567660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.264 [2024-10-01 13:44:06.567693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.264 [2024-10-01 13:44:06.570634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.264 [2024-10-01 13:44:06.570754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.264 [2024-10-01 13:44:06.570809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.264 [2024-10-01 13:44:06.570830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.264 [2024-10-01 13:44:06.570864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.264 [2024-10-01 13:44:06.570896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.264 [2024-10-01 13:44:06.570913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.264 [2024-10-01 13:44:06.570942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.264 [2024-10-01 13:44:06.572064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.264 [2024-10-01 13:44:06.578308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.264 [2024-10-01 13:44:06.578431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.264 [2024-10-01 13:44:06.578474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.264 [2024-10-01 13:44:06.578496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.264 [2024-10-01 13:44:06.578530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.264 [2024-10-01 13:44:06.578581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.264 [2024-10-01 13:44:06.578600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.264 [2024-10-01 13:44:06.578614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.264 [2024-10-01 13:44:06.578647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.264 [2024-10-01 13:44:06.581521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.264 [2024-10-01 13:44:06.581653] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.264 [2024-10-01 13:44:06.581692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.264 [2024-10-01 13:44:06.581712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.264 [2024-10-01 13:44:06.581746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.264 [2024-10-01 13:44:06.581779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.264 [2024-10-01 13:44:06.581796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.264 [2024-10-01 13:44:06.581811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.264 [2024-10-01 13:44:06.581842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.264 [2024-10-01 13:44:06.588412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.264 [2024-10-01 13:44:06.588548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.264 [2024-10-01 13:44:06.588598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.264 [2024-10-01 13:44:06.588619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.265 [2024-10-01 13:44:06.588653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.265 [2024-10-01 13:44:06.588686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.265 [2024-10-01 13:44:06.588703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.265 [2024-10-01 13:44:06.588717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.265 [2024-10-01 13:44:06.588750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.265 [2024-10-01 13:44:06.592551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.265 [2024-10-01 13:44:06.592671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.265 [2024-10-01 13:44:06.592732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.265 [2024-10-01 13:44:06.592754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.265 [2024-10-01 13:44:06.592789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.265 [2024-10-01 13:44:06.592822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.265 [2024-10-01 13:44:06.592840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.265 [2024-10-01 13:44:06.592855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.265 [2024-10-01 13:44:06.592887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.265 [2024-10-01 13:44:06.599282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.265 [2024-10-01 13:44:06.599454] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.265 [2024-10-01 13:44:06.599489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.265 [2024-10-01 13:44:06.599508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.265 [2024-10-01 13:44:06.599562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.265 [2024-10-01 13:44:06.599599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.265 [2024-10-01 13:44:06.599617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.265 [2024-10-01 13:44:06.599632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.265 [2024-10-01 13:44:06.600766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.265 [2024-10-01 13:44:06.602862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.265 [2024-10-01 13:44:06.602980] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.265 [2024-10-01 13:44:06.603023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.265 [2024-10-01 13:44:06.603043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.265 [2024-10-01 13:44:06.603077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.265 [2024-10-01 13:44:06.603110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.265 [2024-10-01 13:44:06.603133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.265 [2024-10-01 13:44:06.603160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.265 [2024-10-01 13:44:06.603197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.265 [2024-10-01 13:44:06.610319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.265 [2024-10-01 13:44:06.610451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.265 [2024-10-01 13:44:06.610485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.265 [2024-10-01 13:44:06.610504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.265 [2024-10-01 13:44:06.610553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.265 [2024-10-01 13:44:06.610620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.265 [2024-10-01 13:44:06.610640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.265 [2024-10-01 13:44:06.610655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.265 [2024-10-01 13:44:06.610693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.265 [2024-10-01 13:44:06.613624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.265 [2024-10-01 13:44:06.613755] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.265 [2024-10-01 13:44:06.613788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.265 [2024-10-01 13:44:06.613807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.265 [2024-10-01 13:44:06.613840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.265 [2024-10-01 13:44:06.613872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.265 [2024-10-01 13:44:06.613890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.265 [2024-10-01 13:44:06.613904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.265 [2024-10-01 13:44:06.613936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.265 [2024-10-01 13:44:06.621393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.265 [2024-10-01 13:44:06.621520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.265 [2024-10-01 13:44:06.621570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.265 [2024-10-01 13:44:06.621590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.265 [2024-10-01 13:44:06.621625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.265 [2024-10-01 13:44:06.621658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.265 [2024-10-01 13:44:06.621676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.265 [2024-10-01 13:44:06.621690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.265 [2024-10-01 13:44:06.621723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.265 [2024-10-01 13:44:06.624599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.265 [2024-10-01 13:44:06.624720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.265 [2024-10-01 13:44:06.624754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.265 [2024-10-01 13:44:06.624774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.265 [2024-10-01 13:44:06.624808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.265 [2024-10-01 13:44:06.624841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.265 [2024-10-01 13:44:06.624859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.265 [2024-10-01 13:44:06.624874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.265 [2024-10-01 13:44:06.624906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.265 [2024-10-01 13:44:06.631497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.265 [2024-10-01 13:44:06.631639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.265 [2024-10-01 13:44:06.631679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.265 [2024-10-01 13:44:06.631699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.265 [2024-10-01 13:44:06.631733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.265 [2024-10-01 13:44:06.631765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.265 [2024-10-01 13:44:06.631783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.265 [2024-10-01 13:44:06.631798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.265 [2024-10-01 13:44:06.632077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.265 [2024-10-01 13:44:06.635569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.265 [2024-10-01 13:44:06.635692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.265 [2024-10-01 13:44:06.635728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.265 [2024-10-01 13:44:06.635746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.265 [2024-10-01 13:44:06.635780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.265 [2024-10-01 13:44:06.635813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.265 [2024-10-01 13:44:06.635831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.265 [2024-10-01 13:44:06.635845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.265 [2024-10-01 13:44:06.635876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.265 [2024-10-01 13:44:06.642079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.265 [2024-10-01 13:44:06.642214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.265 [2024-10-01 13:44:06.642269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.265 [2024-10-01 13:44:06.642290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.265 [2024-10-01 13:44:06.642325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.265 [2024-10-01 13:44:06.642358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.266 [2024-10-01 13:44:06.642376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.266 [2024-10-01 13:44:06.642391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.266 [2024-10-01 13:44:06.642424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.266 [2024-10-01 13:44:06.645669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.266 [2024-10-01 13:44:06.645795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.266 [2024-10-01 13:44:06.645835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.266 [2024-10-01 13:44:06.645881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.266 [2024-10-01 13:44:06.645917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.266 [2024-10-01 13:44:06.645951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.266 [2024-10-01 13:44:06.645969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.266 [2024-10-01 13:44:06.645983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.266 [2024-10-01 13:44:06.646254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.266 [2024-10-01 13:44:06.652904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.266 [2024-10-01 13:44:06.653026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.266 [2024-10-01 13:44:06.653061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.266 [2024-10-01 13:44:06.653079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.266 [2024-10-01 13:44:06.653114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.266 [2024-10-01 13:44:06.653162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.266 [2024-10-01 13:44:06.653184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.266 [2024-10-01 13:44:06.653199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.266 [2024-10-01 13:44:06.653231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.266 [2024-10-01 13:44:06.656215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.266 [2024-10-01 13:44:06.656336] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.266 [2024-10-01 13:44:06.656369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.266 [2024-10-01 13:44:06.656387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.266 [2024-10-01 13:44:06.656421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.266 [2024-10-01 13:44:06.656453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.266 [2024-10-01 13:44:06.656471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.266 [2024-10-01 13:44:06.656486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.266 [2024-10-01 13:44:06.656519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.266 [2024-10-01 13:44:06.663834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.266 [2024-10-01 13:44:06.663968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.266 [2024-10-01 13:44:06.664013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.266 [2024-10-01 13:44:06.664033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.266 [2024-10-01 13:44:06.664068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.266 [2024-10-01 13:44:06.664101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.266 [2024-10-01 13:44:06.664161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.266 [2024-10-01 13:44:06.664180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.266 [2024-10-01 13:44:06.664216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.266 8661.36 IOPS, 33.83 MiB/s [2024-10-01 13:44:06.669939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.266 [2024-10-01 13:44:06.670999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.266 [2024-10-01 13:44:06.671047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.266 [2024-10-01 13:44:06.671069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.266 [2024-10-01 13:44:06.671274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.266 [2024-10-01 13:44:06.672437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.266 [2024-10-01 13:44:06.672478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.266 [2024-10-01 13:44:06.672497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.266 [2024-10-01 13:44:06.673769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.266 [2024-10-01 13:44:06.674925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.266 [2024-10-01 13:44:06.675053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.266 [2024-10-01 13:44:06.675095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.266 [2024-10-01 13:44:06.675116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.266 [2024-10-01 13:44:06.675182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.266 [2024-10-01 13:44:06.675222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.266 [2024-10-01 13:44:06.675240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.266 [2024-10-01 13:44:06.675254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.266 [2024-10-01 13:44:06.675287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.266 [2024-10-01 13:44:06.680035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.266 [2024-10-01 13:44:06.681072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.266 [2024-10-01 13:44:06.681119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.266 [2024-10-01 13:44:06.681152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.266 [2024-10-01 13:44:06.681358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.266 [2024-10-01 13:44:06.681418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.266 [2024-10-01 13:44:06.681441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.266 [2024-10-01 13:44:06.681456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.266 [2024-10-01 13:44:06.681490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.266 [2024-10-01 13:44:06.686200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.266 [2024-10-01 13:44:06.687387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.266 [2024-10-01 13:44:06.687435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.266 [2024-10-01 13:44:06.687456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.266 [2024-10-01 13:44:06.688097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.266 [2024-10-01 13:44:06.688225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.266 [2024-10-01 13:44:06.688261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.266 [2024-10-01 13:44:06.688279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.266 [2024-10-01 13:44:06.688315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.266 [2024-10-01 13:44:06.692239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.266 [2024-10-01 13:44:06.692369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.266 [2024-10-01 13:44:06.692404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.266 [2024-10-01 13:44:06.692423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.266 [2024-10-01 13:44:06.692458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.266 [2024-10-01 13:44:06.692491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.266 [2024-10-01 13:44:06.692509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.266 [2024-10-01 13:44:06.692524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.266 [2024-10-01 13:44:06.692574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.266 [2024-10-01 13:44:06.696318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.266 [2024-10-01 13:44:06.696470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.266 [2024-10-01 13:44:06.696513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.266 [2024-10-01 13:44:06.696547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.266 [2024-10-01 13:44:06.697838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.266 [2024-10-01 13:44:06.698096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.266 [2024-10-01 13:44:06.698138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.266 [2024-10-01 13:44:06.698165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.266 [2024-10-01 13:44:06.698962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.267 [2024-10-01 13:44:06.702507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.267 [2024-10-01 13:44:06.702646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.267 [2024-10-01 13:44:06.702691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.267 [2024-10-01 13:44:06.702712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.267 [2024-10-01 13:44:06.702778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.267 [2024-10-01 13:44:06.702812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.267 [2024-10-01 13:44:06.702830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.267 [2024-10-01 13:44:06.702844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.267 [2024-10-01 13:44:06.703107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.267 [2024-10-01 13:44:06.706629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.267 [2024-10-01 13:44:06.706751] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.267 [2024-10-01 13:44:06.706794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.267 [2024-10-01 13:44:06.706815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.267 [2024-10-01 13:44:06.706850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.267 [2024-10-01 13:44:06.706882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.267 [2024-10-01 13:44:06.706900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.267 [2024-10-01 13:44:06.706914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.267 [2024-10-01 13:44:06.706947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.267 [2024-10-01 13:44:06.713141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.267 [2024-10-01 13:44:06.713268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.267 [2024-10-01 13:44:06.713311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.267 [2024-10-01 13:44:06.713331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.267 [2024-10-01 13:44:06.713365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.267 [2024-10-01 13:44:06.713398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.267 [2024-10-01 13:44:06.713416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.267 [2024-10-01 13:44:06.713430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.267 [2024-10-01 13:44:06.713462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.267 [2024-10-01 13:44:06.716732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.267 [2024-10-01 13:44:06.716849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.267 [2024-10-01 13:44:06.716894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.267 [2024-10-01 13:44:06.716915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.267 [2024-10-01 13:44:06.716949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.267 [2024-10-01 13:44:06.716981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.267 [2024-10-01 13:44:06.716998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.267 [2024-10-01 13:44:06.717029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.267 [2024-10-01 13:44:06.717065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.267 [2024-10-01 13:44:06.723238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.267 [2024-10-01 13:44:06.723358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.267 [2024-10-01 13:44:06.723404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.267 [2024-10-01 13:44:06.723424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.267 [2024-10-01 13:44:06.724389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.267 [2024-10-01 13:44:06.724629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.267 [2024-10-01 13:44:06.724666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.267 [2024-10-01 13:44:06.724684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.267 [2024-10-01 13:44:06.724728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.267 [2024-10-01 13:44:06.727769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.267 [2024-10-01 13:44:06.727948] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.267 [2024-10-01 13:44:06.727992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.267 [2024-10-01 13:44:06.728013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.267 [2024-10-01 13:44:06.728047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.267 [2024-10-01 13:44:06.728080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.267 [2024-10-01 13:44:06.728097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.267 [2024-10-01 13:44:06.728112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.267 [2024-10-01 13:44:06.728154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.267 [2024-10-01 13:44:06.735466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.267 [2024-10-01 13:44:06.735604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.267 [2024-10-01 13:44:06.735647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.267 [2024-10-01 13:44:06.735668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.267 [2024-10-01 13:44:06.735702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.267 [2024-10-01 13:44:06.735735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.267 [2024-10-01 13:44:06.735753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.267 [2024-10-01 13:44:06.735767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.267 [2024-10-01 13:44:06.735799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.267 [2024-10-01 13:44:06.738713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.267 [2024-10-01 13:44:06.738857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.267 [2024-10-01 13:44:06.738919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.267 [2024-10-01 13:44:06.738957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.267 [2024-10-01 13:44:06.738994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.267 [2024-10-01 13:44:06.739028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.267 [2024-10-01 13:44:06.739045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.267 [2024-10-01 13:44:06.739060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.267 [2024-10-01 13:44:06.739093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.268 [2024-10-01 13:44:06.745592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.268 [2024-10-01 13:44:06.745715] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.268 [2024-10-01 13:44:06.745749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.268 [2024-10-01 13:44:06.745767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.268 [2024-10-01 13:44:06.745801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.268 [2024-10-01 13:44:06.745833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.268 [2024-10-01 13:44:06.745850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.268 [2024-10-01 13:44:06.745865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.268 [2024-10-01 13:44:06.745897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.268 [2024-10-01 13:44:06.749886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.268 [2024-10-01 13:44:06.750010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.268 [2024-10-01 13:44:06.750054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.268 [2024-10-01 13:44:06.750075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.268 [2024-10-01 13:44:06.750110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.268 [2024-10-01 13:44:06.750155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.268 [2024-10-01 13:44:06.750179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.268 [2024-10-01 13:44:06.750193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.268 [2024-10-01 13:44:06.750227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.268 [2024-10-01 13:44:06.756365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.268 [2024-10-01 13:44:06.756489] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.268 [2024-10-01 13:44:06.756548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.268 [2024-10-01 13:44:06.756572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.268 [2024-10-01 13:44:06.756608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.268 [2024-10-01 13:44:06.756662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.268 [2024-10-01 13:44:06.756682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.268 [2024-10-01 13:44:06.756696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.268 [2024-10-01 13:44:06.757800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.268 [2024-10-01 13:44:06.759988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.268 [2024-10-01 13:44:06.760107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.268 [2024-10-01 13:44:06.760158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.268 [2024-10-01 13:44:06.760182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.268 [2024-10-01 13:44:06.760217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.268 [2024-10-01 13:44:06.760487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.268 [2024-10-01 13:44:06.760523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.268 [2024-10-01 13:44:06.760557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.268 [2024-10-01 13:44:06.760705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.268 [2024-10-01 13:44:06.767256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.268 [2024-10-01 13:44:06.767376] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.268 [2024-10-01 13:44:06.767419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.268 [2024-10-01 13:44:06.767440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.268 [2024-10-01 13:44:06.767473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.268 [2024-10-01 13:44:06.767506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.268 [2024-10-01 13:44:06.767523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.268 [2024-10-01 13:44:06.767554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.268 [2024-10-01 13:44:06.767591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.268 [2024-10-01 13:44:06.770566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.268 [2024-10-01 13:44:06.770686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.268 [2024-10-01 13:44:06.770734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.269 [2024-10-01 13:44:06.770754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.269 [2024-10-01 13:44:06.770788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.269 [2024-10-01 13:44:06.770821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.269 [2024-10-01 13:44:06.770838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.269 [2024-10-01 13:44:06.770853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.269 [2024-10-01 13:44:06.770902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.269 [2024-10-01 13:44:06.778181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.269 [2024-10-01 13:44:06.778304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.269 [2024-10-01 13:44:06.778347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.269 [2024-10-01 13:44:06.778368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.269 [2024-10-01 13:44:06.778402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.269 [2024-10-01 13:44:06.778435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.269 [2024-10-01 13:44:06.778453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.269 [2024-10-01 13:44:06.778467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.269 [2024-10-01 13:44:06.778499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.269 [2024-10-01 13:44:06.781431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.269 [2024-10-01 13:44:06.781565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.269 [2024-10-01 13:44:06.781608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.269 [2024-10-01 13:44:06.781628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.269 [2024-10-01 13:44:06.781663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.269 [2024-10-01 13:44:06.781696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.269 [2024-10-01 13:44:06.781713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.269 [2024-10-01 13:44:06.781727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.269 [2024-10-01 13:44:06.781759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.269 [2024-10-01 13:44:06.788415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.269 [2024-10-01 13:44:06.788590] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.269 [2024-10-01 13:44:06.788631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.269 [2024-10-01 13:44:06.788651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.269 [2024-10-01 13:44:06.788688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.269 [2024-10-01 13:44:06.788722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.269 [2024-10-01 13:44:06.788741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.269 [2024-10-01 13:44:06.788755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.269 [2024-10-01 13:44:06.788789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.269 [2024-10-01 13:44:06.792618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.269 [2024-10-01 13:44:06.792743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.269 [2024-10-01 13:44:06.792785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.269 [2024-10-01 13:44:06.792837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.269 [2024-10-01 13:44:06.792875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.269 [2024-10-01 13:44:06.792908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.269 [2024-10-01 13:44:06.792926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.269 [2024-10-01 13:44:06.792940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.269 [2024-10-01 13:44:06.792973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.269 [2024-10-01 13:44:06.799218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.269 [2024-10-01 13:44:06.799341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.269 [2024-10-01 13:44:06.799387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.269 [2024-10-01 13:44:06.799408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.269 [2024-10-01 13:44:06.799442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.269 [2024-10-01 13:44:06.799475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.269 [2024-10-01 13:44:06.799493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.269 [2024-10-01 13:44:06.799507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.269 [2024-10-01 13:44:06.799557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.269 [2024-10-01 13:44:06.802777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.269 [2024-10-01 13:44:06.802897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.269 [2024-10-01 13:44:06.802940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.269 [2024-10-01 13:44:06.802960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.269 [2024-10-01 13:44:06.802995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.269 [2024-10-01 13:44:06.803027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.269 [2024-10-01 13:44:06.803045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.269 [2024-10-01 13:44:06.803059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.269 [2024-10-01 13:44:06.803091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.269 [2024-10-01 13:44:06.810021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.269 [2024-10-01 13:44:06.810150] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.269 [2024-10-01 13:44:06.810197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.270 [2024-10-01 13:44:06.810218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.270 [2024-10-01 13:44:06.810253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.270 [2024-10-01 13:44:06.810286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.270 [2024-10-01 13:44:06.810322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.270 [2024-10-01 13:44:06.810338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.270 [2024-10-01 13:44:06.810372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.270 [2024-10-01 13:44:06.813341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.270 [2024-10-01 13:44:06.813462] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.270 [2024-10-01 13:44:06.813506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.270 [2024-10-01 13:44:06.813526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.270 [2024-10-01 13:44:06.813577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.270 [2024-10-01 13:44:06.813612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.270 [2024-10-01 13:44:06.813630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.270 [2024-10-01 13:44:06.813644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.270 [2024-10-01 13:44:06.814744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.270 [2024-10-01 13:44:06.820920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.270 [2024-10-01 13:44:06.821040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.270 [2024-10-01 13:44:06.821083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.270 [2024-10-01 13:44:06.821103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.270 [2024-10-01 13:44:06.821143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.270 [2024-10-01 13:44:06.821186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.270 [2024-10-01 13:44:06.821205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.270 [2024-10-01 13:44:06.821220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.270 [2024-10-01 13:44:06.821252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.270 [2024-10-01 13:44:06.824152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.270 [2024-10-01 13:44:06.824275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.270 [2024-10-01 13:44:06.824318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.270 [2024-10-01 13:44:06.824339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.270 [2024-10-01 13:44:06.824373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.270 [2024-10-01 13:44:06.824406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.270 [2024-10-01 13:44:06.824424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.270 [2024-10-01 13:44:06.824459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.270 [2024-10-01 13:44:06.824496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.270 [2024-10-01 13:44:06.831015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.270 [2024-10-01 13:44:06.831146] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.270 [2024-10-01 13:44:06.831194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.270 [2024-10-01 13:44:06.831215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.270 [2024-10-01 13:44:06.831251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.270 [2024-10-01 13:44:06.831283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.270 [2024-10-01 13:44:06.831301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.270 [2024-10-01 13:44:06.831316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.270 [2024-10-01 13:44:06.831596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.270 [2024-10-01 13:44:06.835031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.270 [2024-10-01 13:44:06.835161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.270 [2024-10-01 13:44:06.835195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.270 [2024-10-01 13:44:06.835214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.270 [2024-10-01 13:44:06.835249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.270 [2024-10-01 13:44:06.835281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.270 [2024-10-01 13:44:06.835299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.270 [2024-10-01 13:44:06.835314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.270 [2024-10-01 13:44:06.835347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.270 [2024-10-01 13:44:06.841583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.270 [2024-10-01 13:44:06.841704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.270 [2024-10-01 13:44:06.841746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.270 [2024-10-01 13:44:06.841766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.270 [2024-10-01 13:44:06.841801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.270 [2024-10-01 13:44:06.841834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.270 [2024-10-01 13:44:06.841852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.270 [2024-10-01 13:44:06.841866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.270 [2024-10-01 13:44:06.841898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.270 [2024-10-01 13:44:06.845137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.270 [2024-10-01 13:44:06.845277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.270 [2024-10-01 13:44:06.845321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.271 [2024-10-01 13:44:06.845342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.271 [2024-10-01 13:44:06.845409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.271 [2024-10-01 13:44:06.845444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.271 [2024-10-01 13:44:06.845461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.271 [2024-10-01 13:44:06.845476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.271 [2024-10-01 13:44:06.845509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.271 [2024-10-01 13:44:06.852675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.271 [2024-10-01 13:44:06.852858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.271 [2024-10-01 13:44:06.852895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.271 [2024-10-01 13:44:06.852914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.271 [2024-10-01 13:44:06.852952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.271 [2024-10-01 13:44:06.852986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.271 [2024-10-01 13:44:06.853004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.271 [2024-10-01 13:44:06.853020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.271 [2024-10-01 13:44:06.853053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.271 [2024-10-01 13:44:06.856004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.271 [2024-10-01 13:44:06.856130] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.271 [2024-10-01 13:44:06.856182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.271 [2024-10-01 13:44:06.856204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.271 [2024-10-01 13:44:06.856240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.271 [2024-10-01 13:44:06.856272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.271 [2024-10-01 13:44:06.856289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.271 [2024-10-01 13:44:06.856304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.271 [2024-10-01 13:44:06.856336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.271 [2024-10-01 13:44:06.863676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.271 [2024-10-01 13:44:06.863799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.271 [2024-10-01 13:44:06.863843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.271 [2024-10-01 13:44:06.863864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.271 [2024-10-01 13:44:06.863910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.271 [2024-10-01 13:44:06.863945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.271 [2024-10-01 13:44:06.863964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.271 [2024-10-01 13:44:06.864010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.271 [2024-10-01 13:44:06.864045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.271 [2024-10-01 13:44:06.866913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.271 [2024-10-01 13:44:06.867031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.271 [2024-10-01 13:44:06.867073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.271 [2024-10-01 13:44:06.867094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.271 [2024-10-01 13:44:06.867130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.271 [2024-10-01 13:44:06.867179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.271 [2024-10-01 13:44:06.867199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.271 [2024-10-01 13:44:06.867213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.271 [2024-10-01 13:44:06.867245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.271 [2024-10-01 13:44:06.873793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.271 [2024-10-01 13:44:06.873916] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.271 [2024-10-01 13:44:06.873959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.271 [2024-10-01 13:44:06.873980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.271 [2024-10-01 13:44:06.874014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.271 [2024-10-01 13:44:06.874047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.271 [2024-10-01 13:44:06.874065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.271 [2024-10-01 13:44:06.874080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.271 [2024-10-01 13:44:06.874111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.271 [2024-10-01 13:44:06.877990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.271 [2024-10-01 13:44:06.878131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.271 [2024-10-01 13:44:06.878183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.271 [2024-10-01 13:44:06.878205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.271 [2024-10-01 13:44:06.878241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.271 [2024-10-01 13:44:06.878275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.271 [2024-10-01 13:44:06.878292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.271 [2024-10-01 13:44:06.878306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.271 [2024-10-01 13:44:06.878339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.271 [2024-10-01 13:44:06.884351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.271 [2024-10-01 13:44:06.884473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.271 [2024-10-01 13:44:06.884548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.271 [2024-10-01 13:44:06.884573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.271 [2024-10-01 13:44:06.884610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.271 [2024-10-01 13:44:06.884643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.271 [2024-10-01 13:44:06.884662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.272 [2024-10-01 13:44:06.884676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.272 [2024-10-01 13:44:06.884708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.272 [2024-10-01 13:44:06.888087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.272 [2024-10-01 13:44:06.888214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.272 [2024-10-01 13:44:06.888259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.272 [2024-10-01 13:44:06.888279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.272 [2024-10-01 13:44:06.888561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.272 [2024-10-01 13:44:06.888725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.272 [2024-10-01 13:44:06.888759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.272 [2024-10-01 13:44:06.888776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.272 [2024-10-01 13:44:06.888887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.272 [2024-10-01 13:44:06.895129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.272 [2024-10-01 13:44:06.895256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.272 [2024-10-01 13:44:06.895300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.272 [2024-10-01 13:44:06.895320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.272 [2024-10-01 13:44:06.895354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.272 [2024-10-01 13:44:06.895387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.272 [2024-10-01 13:44:06.895405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.272 [2024-10-01 13:44:06.895420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.272 [2024-10-01 13:44:06.895452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.272 [2024-10-01 13:44:06.898429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.272 [2024-10-01 13:44:06.898565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.272 [2024-10-01 13:44:06.898608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.272 [2024-10-01 13:44:06.898628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.272 [2024-10-01 13:44:06.898664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.272 [2024-10-01 13:44:06.898717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.272 [2024-10-01 13:44:06.898737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.272 [2024-10-01 13:44:06.898751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.272 [2024-10-01 13:44:06.899853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.272 [2024-10-01 13:44:06.906049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.272 [2024-10-01 13:44:06.906179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.272 [2024-10-01 13:44:06.906224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.272 [2024-10-01 13:44:06.906245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.272 [2024-10-01 13:44:06.906280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.272 [2024-10-01 13:44:06.906313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.272 [2024-10-01 13:44:06.906332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.272 [2024-10-01 13:44:06.906346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.272 [2024-10-01 13:44:06.906379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.272 [2024-10-01 13:44:06.909327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.272 [2024-10-01 13:44:06.909446] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.272 [2024-10-01 13:44:06.909492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.272 [2024-10-01 13:44:06.909512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.272 [2024-10-01 13:44:06.909562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.272 [2024-10-01 13:44:06.909598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.272 [2024-10-01 13:44:06.909616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.272 [2024-10-01 13:44:06.909631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.272 [2024-10-01 13:44:06.909663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.272 [2024-10-01 13:44:06.916158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.272 [2024-10-01 13:44:06.916278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.272 [2024-10-01 13:44:06.916321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.272 [2024-10-01 13:44:06.916342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.272 [2024-10-01 13:44:06.916376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.272 [2024-10-01 13:44:06.916408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.272 [2024-10-01 13:44:06.916426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.272 [2024-10-01 13:44:06.916441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.272 [2024-10-01 13:44:06.916755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.272 [2024-10-01 13:44:06.920240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.272 [2024-10-01 13:44:06.920360] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.272 [2024-10-01 13:44:06.920400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.272 [2024-10-01 13:44:06.920420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.272 [2024-10-01 13:44:06.920454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.272 [2024-10-01 13:44:06.920486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.272 [2024-10-01 13:44:06.920504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.272 [2024-10-01 13:44:06.920518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.272 [2024-10-01 13:44:06.920566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.272 [2024-10-01 13:44:06.926690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.272 [2024-10-01 13:44:06.926810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.272 [2024-10-01 13:44:06.926854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.272 [2024-10-01 13:44:06.926874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.272 [2024-10-01 13:44:06.926909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.273 [2024-10-01 13:44:06.926941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.273 [2024-10-01 13:44:06.926959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.273 [2024-10-01 13:44:06.926974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.273 [2024-10-01 13:44:06.927005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.273 [2024-10-01 13:44:06.930341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.273 [2024-10-01 13:44:06.930461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.273 [2024-10-01 13:44:06.930501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.273 [2024-10-01 13:44:06.930521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.273 [2024-10-01 13:44:06.930571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.273 [2024-10-01 13:44:06.930607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.273 [2024-10-01 13:44:06.930625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.273 [2024-10-01 13:44:06.930640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.273 [2024-10-01 13:44:06.930912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.273 [2024-10-01 13:44:06.937507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.273 [2024-10-01 13:44:06.937642] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.273 [2024-10-01 13:44:06.937681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.273 [2024-10-01 13:44:06.937719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.273 [2024-10-01 13:44:06.937756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.273 [2024-10-01 13:44:06.937789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.273 [2024-10-01 13:44:06.937807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.273 [2024-10-01 13:44:06.937821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.273 [2024-10-01 13:44:06.937853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.273 [2024-10-01 13:44:06.940814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.273 [2024-10-01 13:44:06.940933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.273 [2024-10-01 13:44:06.940976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.273 [2024-10-01 13:44:06.940996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.273 [2024-10-01 13:44:06.941030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.273 [2024-10-01 13:44:06.941063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.273 [2024-10-01 13:44:06.941081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.273 [2024-10-01 13:44:06.941095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.273 [2024-10-01 13:44:06.941129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.273 [2024-10-01 13:44:06.948400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.273 [2024-10-01 13:44:06.948522] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.273 [2024-10-01 13:44:06.948578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.273 [2024-10-01 13:44:06.948600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.273 [2024-10-01 13:44:06.948635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.273 [2024-10-01 13:44:06.948668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.273 [2024-10-01 13:44:06.948686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.273 [2024-10-01 13:44:06.948701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.273 [2024-10-01 13:44:06.948732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.273 [2024-10-01 13:44:06.951650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.273 [2024-10-01 13:44:06.951767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.273 [2024-10-01 13:44:06.951810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.273 [2024-10-01 13:44:06.951830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.273 [2024-10-01 13:44:06.951864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.273 [2024-10-01 13:44:06.951911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.273 [2024-10-01 13:44:06.951949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.273 [2024-10-01 13:44:06.951964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.273 [2024-10-01 13:44:06.951998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.273 [2024-10-01 13:44:06.958516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.273 [2024-10-01 13:44:06.958654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.273 [2024-10-01 13:44:06.958697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.273 [2024-10-01 13:44:06.958718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.273 [2024-10-01 13:44:06.958758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.273 [2024-10-01 13:44:06.958798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.273 [2024-10-01 13:44:06.958818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.273 [2024-10-01 13:44:06.958832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.273 [2024-10-01 13:44:06.959114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.273 [2024-10-01 13:44:06.962642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.273 [2024-10-01 13:44:06.962759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.273 [2024-10-01 13:44:06.962793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.273 [2024-10-01 13:44:06.962812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.273 [2024-10-01 13:44:06.962846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.273 [2024-10-01 13:44:06.962878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.273 [2024-10-01 13:44:06.962904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.273 [2024-10-01 13:44:06.962918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.273 [2024-10-01 13:44:06.962958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.273 [2024-10-01 13:44:06.969228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.273 [2024-10-01 13:44:06.969346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.273 [2024-10-01 13:44:06.969380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.273 [2024-10-01 13:44:06.969398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.273 [2024-10-01 13:44:06.969432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.273 [2024-10-01 13:44:06.969464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.273 [2024-10-01 13:44:06.969482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.273 [2024-10-01 13:44:06.969497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.273 [2024-10-01 13:44:06.969529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.273 [2024-10-01 13:44:06.973029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.273 [2024-10-01 13:44:06.973161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.273 [2024-10-01 13:44:06.973197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.273 [2024-10-01 13:44:06.973216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.273 [2024-10-01 13:44:06.973251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.273 [2024-10-01 13:44:06.973284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.273 [2024-10-01 13:44:06.973303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.273 [2024-10-01 13:44:06.973317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.273 [2024-10-01 13:44:06.973350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.273 [2024-10-01 13:44:06.979625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.273 [2024-10-01 13:44:06.979747] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.273 [2024-10-01 13:44:06.979781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.273 [2024-10-01 13:44:06.979799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.273 [2024-10-01 13:44:06.979833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.273 [2024-10-01 13:44:06.979866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.273 [2024-10-01 13:44:06.979898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.274 [2024-10-01 13:44:06.979915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.274 [2024-10-01 13:44:06.979948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.274 [2024-10-01 13:44:06.983135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.274 [2024-10-01 13:44:06.983263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.274 [2024-10-01 13:44:06.983309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.274 [2024-10-01 13:44:06.983329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.274 [2024-10-01 13:44:06.983364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.274 [2024-10-01 13:44:06.983396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.274 [2024-10-01 13:44:06.983414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.274 [2024-10-01 13:44:06.983428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.274 [2024-10-01 13:44:06.983461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.274 [2024-10-01 13:44:06.989725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.274 [2024-10-01 13:44:06.989845] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.274 [2024-10-01 13:44:06.989879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.274 [2024-10-01 13:44:06.989897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.274 [2024-10-01 13:44:06.989951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.274 [2024-10-01 13:44:06.989985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.274 [2024-10-01 13:44:06.990003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.274 [2024-10-01 13:44:06.990017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.274 [2024-10-01 13:44:06.990290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.274 [2024-10-01 13:44:06.993813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.274 [2024-10-01 13:44:06.993932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.274 [2024-10-01 13:44:06.993971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.274 [2024-10-01 13:44:06.993992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.274 [2024-10-01 13:44:06.994027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.274 [2024-10-01 13:44:06.994060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.274 [2024-10-01 13:44:06.994078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.274 [2024-10-01 13:44:06.994093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.274 [2024-10-01 13:44:06.994127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.274 [2024-10-01 13:44:07.000296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.274 [2024-10-01 13:44:07.000417] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.274 [2024-10-01 13:44:07.000450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.274 [2024-10-01 13:44:07.000469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.274 [2024-10-01 13:44:07.000502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.274 [2024-10-01 13:44:07.000549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.274 [2024-10-01 13:44:07.000571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.274 [2024-10-01 13:44:07.000586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.274 [2024-10-01 13:44:07.000619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.274 [2024-10-01 13:44:07.003922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.274 [2024-10-01 13:44:07.004053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.274 [2024-10-01 13:44:07.004087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.274 [2024-10-01 13:44:07.004106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.274 [2024-10-01 13:44:07.004145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.274 [2024-10-01 13:44:07.004187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.274 [2024-10-01 13:44:07.004206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.274 [2024-10-01 13:44:07.004236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.274 [2024-10-01 13:44:07.004511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.274 [2024-10-01 13:44:07.011334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.274 [2024-10-01 13:44:07.011458] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.274 [2024-10-01 13:44:07.011493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.274 [2024-10-01 13:44:07.011512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.274 [2024-10-01 13:44:07.011562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.274 [2024-10-01 13:44:07.011599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.274 [2024-10-01 13:44:07.011618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.274 [2024-10-01 13:44:07.011632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.274 [2024-10-01 13:44:07.011664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.274 [2024-10-01 13:44:07.014613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.274 [2024-10-01 13:44:07.014748] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.274 [2024-10-01 13:44:07.014793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.274 [2024-10-01 13:44:07.014818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.274 [2024-10-01 13:44:07.014854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.274 [2024-10-01 13:44:07.016252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.274 [2024-10-01 13:44:07.016304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.274 [2024-10-01 13:44:07.016328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.274 [2024-10-01 13:44:07.016640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.274 [2024-10-01 13:44:07.022368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.274 [2024-10-01 13:44:07.022497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.274 [2024-10-01 13:44:07.022532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.274 [2024-10-01 13:44:07.022574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.274 [2024-10-01 13:44:07.022610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.274 [2024-10-01 13:44:07.022651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.274 [2024-10-01 13:44:07.022671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.274 [2024-10-01 13:44:07.022685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.274 [2024-10-01 13:44:07.022718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.274 [2024-10-01 13:44:07.025563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.274 [2024-10-01 13:44:07.025738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.274 [2024-10-01 13:44:07.025809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.274 [2024-10-01 13:44:07.025833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.274 [2024-10-01 13:44:07.025873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.274 [2024-10-01 13:44:07.025909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.274 [2024-10-01 13:44:07.025927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.274 [2024-10-01 13:44:07.025941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.274 [2024-10-01 13:44:07.025975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.274 [2024-10-01 13:44:07.032468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.274 [2024-10-01 13:44:07.032607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.274 [2024-10-01 13:44:07.032643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.274 [2024-10-01 13:44:07.032662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.275 [2024-10-01 13:44:07.032927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.275 [2024-10-01 13:44:07.033094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.275 [2024-10-01 13:44:07.033131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.275 [2024-10-01 13:44:07.033161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.275 [2024-10-01 13:44:07.033277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.275 [2024-10-01 13:44:07.036310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.275 [2024-10-01 13:44:07.036430] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.275 [2024-10-01 13:44:07.036469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.275 [2024-10-01 13:44:07.036489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.275 [2024-10-01 13:44:07.036524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.275 [2024-10-01 13:44:07.036575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.275 [2024-10-01 13:44:07.036594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.275 [2024-10-01 13:44:07.036608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.275 [2024-10-01 13:44:07.036642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.275 [2024-10-01 13:44:07.042795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.275 [2024-10-01 13:44:07.042917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.275 [2024-10-01 13:44:07.042951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.275 [2024-10-01 13:44:07.042970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.275 [2024-10-01 13:44:07.043004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.275 [2024-10-01 13:44:07.043057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.275 [2024-10-01 13:44:07.043077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.275 [2024-10-01 13:44:07.043091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.275 [2024-10-01 13:44:07.043125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.275 [2024-10-01 13:44:07.046408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.275 [2024-10-01 13:44:07.046529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.275 [2024-10-01 13:44:07.046582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.275 [2024-10-01 13:44:07.046602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.275 [2024-10-01 13:44:07.046637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.275 [2024-10-01 13:44:07.046669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.275 [2024-10-01 13:44:07.046687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.275 [2024-10-01 13:44:07.046702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.275 [2024-10-01 13:44:07.046965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.275 [2024-10-01 13:44:07.053615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.275 [2024-10-01 13:44:07.053735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.275 [2024-10-01 13:44:07.053779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.275 [2024-10-01 13:44:07.053800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.275 [2024-10-01 13:44:07.053834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.275 [2024-10-01 13:44:07.053866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.275 [2024-10-01 13:44:07.053884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.275 [2024-10-01 13:44:07.053898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.275 [2024-10-01 13:44:07.053930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.275 [2024-10-01 13:44:07.056933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.275 [2024-10-01 13:44:07.057056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.275 [2024-10-01 13:44:07.057089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.275 [2024-10-01 13:44:07.057108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.275 [2024-10-01 13:44:07.057150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.275 [2024-10-01 13:44:07.057192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.275 [2024-10-01 13:44:07.057210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.275 [2024-10-01 13:44:07.057225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.275 [2024-10-01 13:44:07.057277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.275 [2024-10-01 13:44:07.064560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.275 [2024-10-01 13:44:07.064680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.275 [2024-10-01 13:44:07.064720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.275 [2024-10-01 13:44:07.064740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.275 [2024-10-01 13:44:07.064774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.275 [2024-10-01 13:44:07.064806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.275 [2024-10-01 13:44:07.064824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.275 [2024-10-01 13:44:07.064839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.275 [2024-10-01 13:44:07.064871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.275 [2024-10-01 13:44:07.067751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.275 [2024-10-01 13:44:07.067869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.275 [2024-10-01 13:44:07.067919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.275 [2024-10-01 13:44:07.067940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.275 [2024-10-01 13:44:07.067975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.275 [2024-10-01 13:44:07.068007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.275 [2024-10-01 13:44:07.068024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.275 [2024-10-01 13:44:07.068039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.275 [2024-10-01 13:44:07.068071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.275 [2024-10-01 13:44:07.074662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.275 [2024-10-01 13:44:07.074782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.275 [2024-10-01 13:44:07.074815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.275 [2024-10-01 13:44:07.074834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.275 [2024-10-01 13:44:07.074867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.275 [2024-10-01 13:44:07.074900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.275 [2024-10-01 13:44:07.074918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.275 [2024-10-01 13:44:07.074932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.275 [2024-10-01 13:44:07.075201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.275 [2024-10-01 13:44:07.078639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.275 [2024-10-01 13:44:07.078758] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.275 [2024-10-01 13:44:07.078800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.275 [2024-10-01 13:44:07.078838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.275 [2024-10-01 13:44:07.078876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.275 [2024-10-01 13:44:07.078910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.275 [2024-10-01 13:44:07.078928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.275 [2024-10-01 13:44:07.078942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.275 [2024-10-01 13:44:07.078975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.275 [2024-10-01 13:44:07.085129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.275 [2024-10-01 13:44:07.085258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.275 [2024-10-01 13:44:07.085293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.275 [2024-10-01 13:44:07.085312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.275 [2024-10-01 13:44:07.085345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.275 [2024-10-01 13:44:07.085384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.275 [2024-10-01 13:44:07.085401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.275 [2024-10-01 13:44:07.085416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.275 [2024-10-01 13:44:07.085449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.275 [2024-10-01 13:44:07.088737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.275 [2024-10-01 13:44:07.088857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.275 [2024-10-01 13:44:07.088896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.275 [2024-10-01 13:44:07.088916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.275 [2024-10-01 13:44:07.088950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.275 [2024-10-01 13:44:07.088983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.275 [2024-10-01 13:44:07.089001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.275 [2024-10-01 13:44:07.089015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.275 [2024-10-01 13:44:07.089287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.275 [2024-10-01 13:44:07.095939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.275 [2024-10-01 13:44:07.096058] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.275 [2024-10-01 13:44:07.096093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.275 [2024-10-01 13:44:07.096112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.276 [2024-10-01 13:44:07.096162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.276 [2024-10-01 13:44:07.096200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.276 [2024-10-01 13:44:07.096235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.276 [2024-10-01 13:44:07.096251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.276 [2024-10-01 13:44:07.096285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.276 [2024-10-01 13:44:07.099230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.276 [2024-10-01 13:44:07.099350] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.276 [2024-10-01 13:44:07.099384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.276 [2024-10-01 13:44:07.099403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.276 [2024-10-01 13:44:07.099437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.276 [2024-10-01 13:44:07.099469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.276 [2024-10-01 13:44:07.099487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.276 [2024-10-01 13:44:07.099502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.276 [2024-10-01 13:44:07.099548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.276 [2024-10-01 13:44:07.106868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.276 [2024-10-01 13:44:07.106990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.276 [2024-10-01 13:44:07.107029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.276 [2024-10-01 13:44:07.107049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.276 [2024-10-01 13:44:07.107083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.276 [2024-10-01 13:44:07.107115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.276 [2024-10-01 13:44:07.107143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.276 [2024-10-01 13:44:07.107167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.276 [2024-10-01 13:44:07.107203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.276 [2024-10-01 13:44:07.110054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.276 [2024-10-01 13:44:07.110182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.276 [2024-10-01 13:44:07.110226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.276 [2024-10-01 13:44:07.110246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.276 [2024-10-01 13:44:07.110281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.276 [2024-10-01 13:44:07.110314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.276 [2024-10-01 13:44:07.110331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.276 [2024-10-01 13:44:07.110346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.276 [2024-10-01 13:44:07.110378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.276 [2024-10-01 13:44:07.116964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.276 [2024-10-01 13:44:07.117086] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.276 [2024-10-01 13:44:07.117122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.276 [2024-10-01 13:44:07.117153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.276 [2024-10-01 13:44:07.117197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.276 [2024-10-01 13:44:07.117230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.276 [2024-10-01 13:44:07.117248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.276 [2024-10-01 13:44:07.117263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.276 [2024-10-01 13:44:07.117295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.276 [2024-10-01 13:44:07.121015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.276 [2024-10-01 13:44:07.121141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.276 [2024-10-01 13:44:07.121194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.276 [2024-10-01 13:44:07.121216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.276 [2024-10-01 13:44:07.121251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.276 [2024-10-01 13:44:07.121285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.276 [2024-10-01 13:44:07.121303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.276 [2024-10-01 13:44:07.121318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.276 [2024-10-01 13:44:07.121351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.276 [2024-10-01 13:44:07.127495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.276 [2024-10-01 13:44:07.127627] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.276 [2024-10-01 13:44:07.127673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.276 [2024-10-01 13:44:07.127694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.276 [2024-10-01 13:44:07.127729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.276 [2024-10-01 13:44:07.127761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.276 [2024-10-01 13:44:07.127779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.276 [2024-10-01 13:44:07.127793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.276 [2024-10-01 13:44:07.127825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.276 [2024-10-01 13:44:07.131114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.276 [2024-10-01 13:44:07.131245] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.276 [2024-10-01 13:44:07.131281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.276 [2024-10-01 13:44:07.131299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.276 [2024-10-01 13:44:07.131353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.276 [2024-10-01 13:44:07.131387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.276 [2024-10-01 13:44:07.131405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.276 [2024-10-01 13:44:07.131419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.276 [2024-10-01 13:44:07.131699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.276 [2024-10-01 13:44:07.138300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.276 [2024-10-01 13:44:07.138418] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.276 [2024-10-01 13:44:07.138451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.276 [2024-10-01 13:44:07.138470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.276 [2024-10-01 13:44:07.138503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.276 [2024-10-01 13:44:07.138551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.276 [2024-10-01 13:44:07.138573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.276 [2024-10-01 13:44:07.138588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.276 [2024-10-01 13:44:07.138621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.276 [2024-10-01 13:44:07.141602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.276 [2024-10-01 13:44:07.141721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.276 [2024-10-01 13:44:07.141754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.276 [2024-10-01 13:44:07.141772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.276 [2024-10-01 13:44:07.141806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.276 [2024-10-01 13:44:07.141839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.276 [2024-10-01 13:44:07.141856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.276 [2024-10-01 13:44:07.141870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.276 [2024-10-01 13:44:07.141903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.276 [2024-10-01 13:44:07.149198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.276 [2024-10-01 13:44:07.149320] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.276 [2024-10-01 13:44:07.149364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.276 [2024-10-01 13:44:07.149385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.276 [2024-10-01 13:44:07.149419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.276 [2024-10-01 13:44:07.149452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.276 [2024-10-01 13:44:07.149470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.276 [2024-10-01 13:44:07.149502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.276 [2024-10-01 13:44:07.149552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.276 [2024-10-01 13:44:07.152356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.276 [2024-10-01 13:44:07.152477] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.276 [2024-10-01 13:44:07.152517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.276 [2024-10-01 13:44:07.152550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.276 [2024-10-01 13:44:07.152588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.276 [2024-10-01 13:44:07.152622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.276 [2024-10-01 13:44:07.152640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.276 [2024-10-01 13:44:07.152655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.276 [2024-10-01 13:44:07.152687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.276 [2024-10-01 13:44:07.159299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.276 [2024-10-01 13:44:07.159420] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.276 [2024-10-01 13:44:07.159463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.276 [2024-10-01 13:44:07.159484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.276 [2024-10-01 13:44:07.159518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.276 [2024-10-01 13:44:07.159565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.276 [2024-10-01 13:44:07.159586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.276 [2024-10-01 13:44:07.159601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.276 [2024-10-01 13:44:07.159863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.276 [2024-10-01 13:44:07.163292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.277 [2024-10-01 13:44:07.163412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.277 [2024-10-01 13:44:07.163446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.277 [2024-10-01 13:44:07.163465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.277 [2024-10-01 13:44:07.163498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.277 [2024-10-01 13:44:07.163530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.277 [2024-10-01 13:44:07.163564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.277 [2024-10-01 13:44:07.163579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.277 [2024-10-01 13:44:07.163613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.277 [2024-10-01 13:44:07.169779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.277 [2024-10-01 13:44:07.169917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.277 [2024-10-01 13:44:07.169961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.277 [2024-10-01 13:44:07.169981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.277 [2024-10-01 13:44:07.170016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.277 [2024-10-01 13:44:07.170048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.277 [2024-10-01 13:44:07.170067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.277 [2024-10-01 13:44:07.170081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.277 [2024-10-01 13:44:07.170114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.277 [2024-10-01 13:44:07.173390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.277 [2024-10-01 13:44:07.173510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.277 [2024-10-01 13:44:07.173560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.277 [2024-10-01 13:44:07.173582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.277 [2024-10-01 13:44:07.173617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.277 [2024-10-01 13:44:07.173649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.277 [2024-10-01 13:44:07.173667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.277 [2024-10-01 13:44:07.173681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.277 [2024-10-01 13:44:07.173943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.277 [2024-10-01 13:44:07.180633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.277 [2024-10-01 13:44:07.180752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.277 [2024-10-01 13:44:07.180785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.277 [2024-10-01 13:44:07.180804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.277 [2024-10-01 13:44:07.180837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.277 [2024-10-01 13:44:07.180869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.277 [2024-10-01 13:44:07.180887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.277 [2024-10-01 13:44:07.180901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.277 [2024-10-01 13:44:07.180934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.277 [2024-10-01 13:44:07.183894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.277 [2024-10-01 13:44:07.184015] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.277 [2024-10-01 13:44:07.184050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.277 [2024-10-01 13:44:07.184069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.277 [2024-10-01 13:44:07.184103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.277 [2024-10-01 13:44:07.184175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.277 [2024-10-01 13:44:07.184198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.277 [2024-10-01 13:44:07.184213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.277 [2024-10-01 13:44:07.184246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.277 [2024-10-01 13:44:07.191481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.277 [2024-10-01 13:44:07.191617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.277 [2024-10-01 13:44:07.191661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.277 [2024-10-01 13:44:07.191682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.277 [2024-10-01 13:44:07.191716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.277 [2024-10-01 13:44:07.191749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.277 [2024-10-01 13:44:07.191767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.277 [2024-10-01 13:44:07.191781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.277 [2024-10-01 13:44:07.191813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.277 [2024-10-01 13:44:07.194719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.277 [2024-10-01 13:44:07.194838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.277 [2024-10-01 13:44:07.194876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.277 [2024-10-01 13:44:07.194896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.277 [2024-10-01 13:44:07.194930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.277 [2024-10-01 13:44:07.194963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.277 [2024-10-01 13:44:07.194980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.277 [2024-10-01 13:44:07.194995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.277 [2024-10-01 13:44:07.195027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.277 [2024-10-01 13:44:07.201597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.277 [2024-10-01 13:44:07.201717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.277 [2024-10-01 13:44:07.201751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.277 [2024-10-01 13:44:07.201770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.277 [2024-10-01 13:44:07.201804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.277 [2024-10-01 13:44:07.201837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.277 [2024-10-01 13:44:07.201855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.277 [2024-10-01 13:44:07.201870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.277 [2024-10-01 13:44:07.202156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.277 [2024-10-01 13:44:07.205606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.277 [2024-10-01 13:44:07.205726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.277 [2024-10-01 13:44:07.205762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.277 [2024-10-01 13:44:07.205780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.277 [2024-10-01 13:44:07.205814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.277 [2024-10-01 13:44:07.205847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.277 [2024-10-01 13:44:07.205864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.277 [2024-10-01 13:44:07.205879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.277 [2024-10-01 13:44:07.205911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.277 [2024-10-01 13:44:07.212064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.277 [2024-10-01 13:44:07.212197] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.277 [2024-10-01 13:44:07.212233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.277 [2024-10-01 13:44:07.212253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.277 [2024-10-01 13:44:07.212287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.277 [2024-10-01 13:44:07.212320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.277 [2024-10-01 13:44:07.212338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.277 [2024-10-01 13:44:07.212352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.277 [2024-10-01 13:44:07.212384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.277 [2024-10-01 13:44:07.215702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.277 [2024-10-01 13:44:07.215819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.277 [2024-10-01 13:44:07.215862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.277 [2024-10-01 13:44:07.215895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.277 [2024-10-01 13:44:07.215933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.277 [2024-10-01 13:44:07.215966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.277 [2024-10-01 13:44:07.215984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.277 [2024-10-01 13:44:07.215998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.277 [2024-10-01 13:44:07.216270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.277 [2024-10-01 13:44:07.222896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.277 [2024-10-01 13:44:07.223024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.277 [2024-10-01 13:44:07.223073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.277 [2024-10-01 13:44:07.223111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.277 [2024-10-01 13:44:07.223160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.277 [2024-10-01 13:44:07.223198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.277 [2024-10-01 13:44:07.223217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.277 [2024-10-01 13:44:07.223231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.277 [2024-10-01 13:44:07.223264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.277 [2024-10-01 13:44:07.226208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.277 [2024-10-01 13:44:07.226329] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.277 [2024-10-01 13:44:07.226371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.277 [2024-10-01 13:44:07.226392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.277 [2024-10-01 13:44:07.226426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.277 [2024-10-01 13:44:07.226459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.277 [2024-10-01 13:44:07.226477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.278 [2024-10-01 13:44:07.226491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.278 [2024-10-01 13:44:07.226523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.278 [2024-10-01 13:44:07.233823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.278 [2024-10-01 13:44:07.233944] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.278 [2024-10-01 13:44:07.233987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.278 [2024-10-01 13:44:07.234008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.278 [2024-10-01 13:44:07.234042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.278 [2024-10-01 13:44:07.234075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.278 [2024-10-01 13:44:07.234093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.278 [2024-10-01 13:44:07.234108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.278 [2024-10-01 13:44:07.234148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.278 [2024-10-01 13:44:07.237022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.278 [2024-10-01 13:44:07.237148] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.278 [2024-10-01 13:44:07.237193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.278 [2024-10-01 13:44:07.237214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.278 [2024-10-01 13:44:07.237249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.278 [2024-10-01 13:44:07.237282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.278 [2024-10-01 13:44:07.237318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.278 [2024-10-01 13:44:07.237334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.278 [2024-10-01 13:44:07.237368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.278 [2024-10-01 13:44:07.244129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.278 [2024-10-01 13:44:07.244547] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.278 [2024-10-01 13:44:07.244596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.278 [2024-10-01 13:44:07.244627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.278 [2024-10-01 13:44:07.244787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.278 [2024-10-01 13:44:07.244921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.278 [2024-10-01 13:44:07.244952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.278 [2024-10-01 13:44:07.244969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.278 [2024-10-01 13:44:07.245034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.278 [2024-10-01 13:44:07.248089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.278 [2024-10-01 13:44:07.248244] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.278 [2024-10-01 13:44:07.248281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.278 [2024-10-01 13:44:07.248308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.278 [2024-10-01 13:44:07.248344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.278 [2024-10-01 13:44:07.248380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.278 [2024-10-01 13:44:07.248399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.278 [2024-10-01 13:44:07.248413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.278 [2024-10-01 13:44:07.248446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.278 [2024-10-01 13:44:07.254425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.278 [2024-10-01 13:44:07.254564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.278 [2024-10-01 13:44:07.254599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.278 [2024-10-01 13:44:07.254618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.278 [2024-10-01 13:44:07.254653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.278 [2024-10-01 13:44:07.254687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.278 [2024-10-01 13:44:07.254705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.278 [2024-10-01 13:44:07.254719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.278 [2024-10-01 13:44:07.255819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.278 [2024-10-01 13:44:07.258202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.278 [2024-10-01 13:44:07.258324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.278 [2024-10-01 13:44:07.258358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.278 [2024-10-01 13:44:07.258378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.278 [2024-10-01 13:44:07.258659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.278 [2024-10-01 13:44:07.258849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.278 [2024-10-01 13:44:07.258884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.278 [2024-10-01 13:44:07.258901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.278 [2024-10-01 13:44:07.259014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.278 [2024-10-01 13:44:07.265198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.278 [2024-10-01 13:44:07.265320] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.278 [2024-10-01 13:44:07.265364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.278 [2024-10-01 13:44:07.265384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.278 [2024-10-01 13:44:07.265420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.278 [2024-10-01 13:44:07.265453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.278 [2024-10-01 13:44:07.265470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.278 [2024-10-01 13:44:07.265485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.278 [2024-10-01 13:44:07.265516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.278 [2024-10-01 13:44:07.268498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.278 [2024-10-01 13:44:07.268628] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.278 [2024-10-01 13:44:07.268673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.278 [2024-10-01 13:44:07.268693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.278 [2024-10-01 13:44:07.268727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.278 [2024-10-01 13:44:07.268759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.278 [2024-10-01 13:44:07.268777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.278 [2024-10-01 13:44:07.268791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.278 [2024-10-01 13:44:07.268823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.278 [2024-10-01 13:44:07.276177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.278 [2024-10-01 13:44:07.276300] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.278 [2024-10-01 13:44:07.276344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.278 [2024-10-01 13:44:07.276364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.278 [2024-10-01 13:44:07.276418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.278 [2024-10-01 13:44:07.276476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.278 [2024-10-01 13:44:07.276496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.278 [2024-10-01 13:44:07.276511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.278 [2024-10-01 13:44:07.276558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.278 [2024-10-01 13:44:07.279289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.278 [2024-10-01 13:44:07.279412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.278 [2024-10-01 13:44:07.279446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.278 [2024-10-01 13:44:07.279465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.278 [2024-10-01 13:44:07.279499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.278 [2024-10-01 13:44:07.279532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.278 [2024-10-01 13:44:07.279568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.278 [2024-10-01 13:44:07.279583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.278 [2024-10-01 13:44:07.279617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.278 [2024-10-01 13:44:07.286279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.278 [2024-10-01 13:44:07.286401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.278 [2024-10-01 13:44:07.286441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.278 [2024-10-01 13:44:07.286461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.278 [2024-10-01 13:44:07.286743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.278 [2024-10-01 13:44:07.286909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.278 [2024-10-01 13:44:07.286944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.278 [2024-10-01 13:44:07.286962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.278 [2024-10-01 13:44:07.287078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.278 [2024-10-01 13:44:07.290182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.278 [2024-10-01 13:44:07.290303] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.278 [2024-10-01 13:44:07.290345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.279 [2024-10-01 13:44:07.290366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.279 [2024-10-01 13:44:07.290399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.279 [2024-10-01 13:44:07.290432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.279 [2024-10-01 13:44:07.290450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.279 [2024-10-01 13:44:07.290483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.279 [2024-10-01 13:44:07.290518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.279 [2024-10-01 13:44:07.296695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.279 [2024-10-01 13:44:07.296818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.279 [2024-10-01 13:44:07.296878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.279 [2024-10-01 13:44:07.296901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.279 [2024-10-01 13:44:07.296936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.279 [2024-10-01 13:44:07.296969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.279 [2024-10-01 13:44:07.296987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.279 [2024-10-01 13:44:07.297022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.279 [2024-10-01 13:44:07.298186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.279 [2024-10-01 13:44:07.300311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.279 [2024-10-01 13:44:07.300431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.279 [2024-10-01 13:44:07.300489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.279 [2024-10-01 13:44:07.300512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.279 [2024-10-01 13:44:07.300562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.279 [2024-10-01 13:44:07.300829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.279 [2024-10-01 13:44:07.300866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.279 [2024-10-01 13:44:07.300883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.279 [2024-10-01 13:44:07.301030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.279 [2024-10-01 13:44:07.307437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.279 [2024-10-01 13:44:07.307570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.279 [2024-10-01 13:44:07.307606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.279 [2024-10-01 13:44:07.307625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.279 [2024-10-01 13:44:07.307659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.279 [2024-10-01 13:44:07.307692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.279 [2024-10-01 13:44:07.307710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.279 [2024-10-01 13:44:07.307724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.279 [2024-10-01 13:44:07.307756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.279 [2024-10-01 13:44:07.310746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.279 [2024-10-01 13:44:07.310887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.279 [2024-10-01 13:44:07.310926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.279 [2024-10-01 13:44:07.310946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.279 [2024-10-01 13:44:07.310980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.279 [2024-10-01 13:44:07.311012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.279 [2024-10-01 13:44:07.311030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.279 [2024-10-01 13:44:07.311044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.279 [2024-10-01 13:44:07.311077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.279 [2024-10-01 13:44:07.318321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.279 [2024-10-01 13:44:07.318442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.279 [2024-10-01 13:44:07.318477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.279 [2024-10-01 13:44:07.318496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.279 [2024-10-01 13:44:07.318530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.279 [2024-10-01 13:44:07.318581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.279 [2024-10-01 13:44:07.318600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.279 [2024-10-01 13:44:07.318614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.279 [2024-10-01 13:44:07.318646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.279 [2024-10-01 13:44:07.321574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.279 [2024-10-01 13:44:07.321692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.279 [2024-10-01 13:44:07.321734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.279 [2024-10-01 13:44:07.321755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.279 [2024-10-01 13:44:07.321790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.279 [2024-10-01 13:44:07.321823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.279 [2024-10-01 13:44:07.321840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.279 [2024-10-01 13:44:07.321854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.279 [2024-10-01 13:44:07.321886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.279 [2024-10-01 13:44:07.328419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.279 [2024-10-01 13:44:07.328553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.279 [2024-10-01 13:44:07.328587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.279 [2024-10-01 13:44:07.328605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.279 [2024-10-01 13:44:07.328639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.279 [2024-10-01 13:44:07.328689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.279 [2024-10-01 13:44:07.328709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.279 [2024-10-01 13:44:07.328724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.279 [2024-10-01 13:44:07.328986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.279 [2024-10-01 13:44:07.332477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.279 [2024-10-01 13:44:07.332608] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.279 [2024-10-01 13:44:07.332643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.279 [2024-10-01 13:44:07.332661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.279 [2024-10-01 13:44:07.332694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.279 [2024-10-01 13:44:07.332727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.279 [2024-10-01 13:44:07.332744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.279 [2024-10-01 13:44:07.332759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.279 [2024-10-01 13:44:07.332791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.279 [2024-10-01 13:44:07.338937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.279 [2024-10-01 13:44:07.339066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.279 [2024-10-01 13:44:07.339105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.279 [2024-10-01 13:44:07.339126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.279 [2024-10-01 13:44:07.339175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.279 [2024-10-01 13:44:07.339210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.279 [2024-10-01 13:44:07.339229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.279 [2024-10-01 13:44:07.339243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.279 [2024-10-01 13:44:07.339275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.279 [2024-10-01 13:44:07.342587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.279 [2024-10-01 13:44:07.342706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.279 [2024-10-01 13:44:07.342749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.279 [2024-10-01 13:44:07.342769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.279 [2024-10-01 13:44:07.342804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.279 [2024-10-01 13:44:07.342836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.279 [2024-10-01 13:44:07.342854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.279 [2024-10-01 13:44:07.342869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.279 [2024-10-01 13:44:07.343158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.279 [2024-10-01 13:44:07.349768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.279 [2024-10-01 13:44:07.349890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.279 [2024-10-01 13:44:07.349929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.279 [2024-10-01 13:44:07.349948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.279 [2024-10-01 13:44:07.349982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.279 [2024-10-01 13:44:07.350014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.279 [2024-10-01 13:44:07.350032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.279 [2024-10-01 13:44:07.350046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.279 [2024-10-01 13:44:07.350078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.279 [2024-10-01 13:44:07.353077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.279 [2024-10-01 13:44:07.353205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.279 [2024-10-01 13:44:07.353239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.279 [2024-10-01 13:44:07.353258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.279 [2024-10-01 13:44:07.353292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.279 [2024-10-01 13:44:07.353324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.279 [2024-10-01 13:44:07.353341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.279 [2024-10-01 13:44:07.353355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.279 [2024-10-01 13:44:07.353388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.280 [2024-10-01 13:44:07.360690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.360811] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.360845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.280 [2024-10-01 13:44:07.360863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.280 [2024-10-01 13:44:07.360897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.280 [2024-10-01 13:44:07.360929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.280 [2024-10-01 13:44:07.360947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.280 [2024-10-01 13:44:07.360962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.280 [2024-10-01 13:44:07.360994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.280 [2024-10-01 13:44:07.363932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.364051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.364095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.280 [2024-10-01 13:44:07.364143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.280 [2024-10-01 13:44:07.364190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.280 [2024-10-01 13:44:07.364224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.280 [2024-10-01 13:44:07.364242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.280 [2024-10-01 13:44:07.364256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.280 [2024-10-01 13:44:07.364289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.280 [2024-10-01 13:44:07.370785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.370906] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.370949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.280 [2024-10-01 13:44:07.370970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.280 [2024-10-01 13:44:07.371005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.280 [2024-10-01 13:44:07.371037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.280 [2024-10-01 13:44:07.371056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.280 [2024-10-01 13:44:07.371070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.280 [2024-10-01 13:44:07.371101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.280 [2024-10-01 13:44:07.374892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.375011] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.375059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.280 [2024-10-01 13:44:07.375079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.280 [2024-10-01 13:44:07.375113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.280 [2024-10-01 13:44:07.375160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.280 [2024-10-01 13:44:07.375182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.280 [2024-10-01 13:44:07.375197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.280 [2024-10-01 13:44:07.375231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.280 [2024-10-01 13:44:07.381444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.381580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.381631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.280 [2024-10-01 13:44:07.381652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.280 [2024-10-01 13:44:07.381694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.280 [2024-10-01 13:44:07.381726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.280 [2024-10-01 13:44:07.381764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.280 [2024-10-01 13:44:07.381780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.280 [2024-10-01 13:44:07.381814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.280 [2024-10-01 13:44:07.384988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.385107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.385159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.280 [2024-10-01 13:44:07.385182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.280 [2024-10-01 13:44:07.385218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.280 [2024-10-01 13:44:07.385251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.280 [2024-10-01 13:44:07.385269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.280 [2024-10-01 13:44:07.385283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.280 [2024-10-01 13:44:07.385317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.280 [2024-10-01 13:44:07.392368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.392487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.392527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.280 [2024-10-01 13:44:07.392563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.280 [2024-10-01 13:44:07.392600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.280 [2024-10-01 13:44:07.392633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.280 [2024-10-01 13:44:07.392651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.280 [2024-10-01 13:44:07.392665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.280 [2024-10-01 13:44:07.392698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.280 [2024-10-01 13:44:07.395639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.395756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.395798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.280 [2024-10-01 13:44:07.395818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.280 [2024-10-01 13:44:07.395852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.280 [2024-10-01 13:44:07.395900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.280 [2024-10-01 13:44:07.395921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.280 [2024-10-01 13:44:07.395936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.280 [2024-10-01 13:44:07.395969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.280 [2024-10-01 13:44:07.403302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.403423] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.403457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.280 [2024-10-01 13:44:07.403476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.280 [2024-10-01 13:44:07.403510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.280 [2024-10-01 13:44:07.403560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.280 [2024-10-01 13:44:07.403582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.280 [2024-10-01 13:44:07.403597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.280 [2024-10-01 13:44:07.403629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.280 [2024-10-01 13:44:07.406509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.406640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.406691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.280 [2024-10-01 13:44:07.406712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.280 [2024-10-01 13:44:07.406745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.280 [2024-10-01 13:44:07.406778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.280 [2024-10-01 13:44:07.406797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.280 [2024-10-01 13:44:07.406811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.280 [2024-10-01 13:44:07.406843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.280 [2024-10-01 13:44:07.413401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.413521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.413577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.280 [2024-10-01 13:44:07.413599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.280 [2024-10-01 13:44:07.413634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.280 [2024-10-01 13:44:07.413667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.280 [2024-10-01 13:44:07.413685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.280 [2024-10-01 13:44:07.413699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.280 [2024-10-01 13:44:07.413731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.280 [2024-10-01 13:44:07.417449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.417582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.417624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.280 [2024-10-01 13:44:07.417645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.280 [2024-10-01 13:44:07.417700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.280 [2024-10-01 13:44:07.417734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.280 [2024-10-01 13:44:07.417752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.280 [2024-10-01 13:44:07.417766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.280 [2024-10-01 13:44:07.417799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.280 [2024-10-01 13:44:07.424004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.424137] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.424188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.280 [2024-10-01 13:44:07.424210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.280 [2024-10-01 13:44:07.424246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.280 [2024-10-01 13:44:07.424279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.280 [2024-10-01 13:44:07.424298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.280 [2024-10-01 13:44:07.424312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.280 [2024-10-01 13:44:07.424344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.280 [2024-10-01 13:44:07.427562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.280 [2024-10-01 13:44:07.427681] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.280 [2024-10-01 13:44:07.427724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.280 [2024-10-01 13:44:07.427744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.281 [2024-10-01 13:44:07.427779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.281 [2024-10-01 13:44:07.427811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.281 [2024-10-01 13:44:07.427830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.281 [2024-10-01 13:44:07.427844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.281 [2024-10-01 13:44:07.427875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.281 [2024-10-01 13:44:07.434819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.281 [2024-10-01 13:44:07.434938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.281 [2024-10-01 13:44:07.434977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.281 [2024-10-01 13:44:07.434997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.281 [2024-10-01 13:44:07.435031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.281 [2024-10-01 13:44:07.435064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.281 [2024-10-01 13:44:07.435081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.281 [2024-10-01 13:44:07.435114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.281 [2024-10-01 13:44:07.435167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.281 [2024-10-01 13:44:07.438128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.281 [2024-10-01 13:44:07.438258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.281 [2024-10-01 13:44:07.438297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.281 [2024-10-01 13:44:07.438317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.281 [2024-10-01 13:44:07.438351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.281 [2024-10-01 13:44:07.438384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.281 [2024-10-01 13:44:07.438402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.281 [2024-10-01 13:44:07.438417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.281 [2024-10-01 13:44:07.438449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.281 [2024-10-01 13:44:07.445723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.281 [2024-10-01 13:44:07.445844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.281 [2024-10-01 13:44:07.445883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.281 [2024-10-01 13:44:07.445903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.281 [2024-10-01 13:44:07.445936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.281 [2024-10-01 13:44:07.445976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.281 [2024-10-01 13:44:07.445994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.281 [2024-10-01 13:44:07.446009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.281 [2024-10-01 13:44:07.446041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.281 [2024-10-01 13:44:07.448943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.281 [2024-10-01 13:44:07.449063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.281 [2024-10-01 13:44:07.449105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.281 [2024-10-01 13:44:07.449128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.281 [2024-10-01 13:44:07.449176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.281 [2024-10-01 13:44:07.449211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.281 [2024-10-01 13:44:07.449228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.281 [2024-10-01 13:44:07.449242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.281 [2024-10-01 13:44:07.449276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.281 [2024-10-01 13:44:07.455827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.281 [2024-10-01 13:44:07.455980] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.281 [2024-10-01 13:44:07.456020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.281 [2024-10-01 13:44:07.456040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.281 [2024-10-01 13:44:07.456087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.281 [2024-10-01 13:44:07.456363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.281 [2024-10-01 13:44:07.456403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.281 [2024-10-01 13:44:07.456422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.281 [2024-10-01 13:44:07.456585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.281 [2024-10-01 13:44:07.459787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.281 [2024-10-01 13:44:07.459917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.281 [2024-10-01 13:44:07.459957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.281 [2024-10-01 13:44:07.459976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.281 [2024-10-01 13:44:07.460011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.281 [2024-10-01 13:44:07.460044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.281 [2024-10-01 13:44:07.460061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.281 [2024-10-01 13:44:07.460075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.281 [2024-10-01 13:44:07.460107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.281 [2024-10-01 13:44:07.466598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.281 [2024-10-01 13:44:07.467905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.281 [2024-10-01 13:44:07.467972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.281 [2024-10-01 13:44:07.467997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.281 [2024-10-01 13:44:07.468265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.281 [2024-10-01 13:44:07.469472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.281 [2024-10-01 13:44:07.469517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.281 [2024-10-01 13:44:07.469549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.281 [2024-10-01 13:44:07.470232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.281 [2024-10-01 13:44:07.470761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.281 [2024-10-01 13:44:07.470981] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.281 [2024-10-01 13:44:07.471031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.281 [2024-10-01 13:44:07.471056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.281 [2024-10-01 13:44:07.471101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.281 [2024-10-01 13:44:07.471192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.281 [2024-10-01 13:44:07.471230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.281 [2024-10-01 13:44:07.471249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.281 [2024-10-01 13:44:07.471288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.281 [2024-10-01 13:44:07.477068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.281 [2024-10-01 13:44:07.477198] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.281 [2024-10-01 13:44:07.477242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.281 [2024-10-01 13:44:07.477263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.281 [2024-10-01 13:44:07.477298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.281 [2024-10-01 13:44:07.477331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.282 [2024-10-01 13:44:07.477349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.282 [2024-10-01 13:44:07.477364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.282 [2024-10-01 13:44:07.477396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.282 [2024-10-01 13:44:07.481989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.282 [2024-10-01 13:44:07.482108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.282 [2024-10-01 13:44:07.482159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.282 [2024-10-01 13:44:07.482183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.282 [2024-10-01 13:44:07.483263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.282 [2024-10-01 13:44:07.483940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.282 [2024-10-01 13:44:07.483980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.282 [2024-10-01 13:44:07.483998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.282 [2024-10-01 13:44:07.484105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.282 [2024-10-01 13:44:07.487929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.282 [2024-10-01 13:44:07.488048] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.282 [2024-10-01 13:44:07.488090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.282 [2024-10-01 13:44:07.488111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.282 [2024-10-01 13:44:07.488154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.282 [2024-10-01 13:44:07.488194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.282 [2024-10-01 13:44:07.488212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.282 [2024-10-01 13:44:07.488227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.282 [2024-10-01 13:44:07.488277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.282 [2024-10-01 13:44:07.493440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.282 [2024-10-01 13:44:07.494303] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.282 [2024-10-01 13:44:07.494350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.282 [2024-10-01 13:44:07.494372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.282 [2024-10-01 13:44:07.494584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.282 [2024-10-01 13:44:07.494682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.282 [2024-10-01 13:44:07.494706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.282 [2024-10-01 13:44:07.494720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.282 [2024-10-01 13:44:07.494755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.282 [2024-10-01 13:44:07.498026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.282 [2024-10-01 13:44:07.498155] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.282 [2024-10-01 13:44:07.498200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.282 [2024-10-01 13:44:07.498220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.282 [2024-10-01 13:44:07.498256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.282 [2024-10-01 13:44:07.498289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.282 [2024-10-01 13:44:07.498307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.282 [2024-10-01 13:44:07.498321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.282 [2024-10-01 13:44:07.498598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.282 [2024-10-01 13:44:07.503679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.282 [2024-10-01 13:44:07.503798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.282 [2024-10-01 13:44:07.503836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.282 [2024-10-01 13:44:07.503855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.282 [2024-10-01 13:44:07.503902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.282 [2024-10-01 13:44:07.503938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.282 [2024-10-01 13:44:07.503955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.282 [2024-10-01 13:44:07.503970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.282 [2024-10-01 13:44:07.504003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.282 [2024-10-01 13:44:07.508508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.282 [2024-10-01 13:44:07.508645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.282 [2024-10-01 13:44:07.508679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.282 [2024-10-01 13:44:07.508715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.282 [2024-10-01 13:44:07.508751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.282 [2024-10-01 13:44:07.508784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.282 [2024-10-01 13:44:07.508802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.282 [2024-10-01 13:44:07.508816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.282 [2024-10-01 13:44:07.508848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.283 [2024-10-01 13:44:07.513776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.283 [2024-10-01 13:44:07.515201] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.283 [2024-10-01 13:44:07.515248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.283 [2024-10-01 13:44:07.515269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.283 [2024-10-01 13:44:07.516291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.283 [2024-10-01 13:44:07.516446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.283 [2024-10-01 13:44:07.516482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.283 [2024-10-01 13:44:07.516500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.283 [2024-10-01 13:44:07.516553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.283 [2024-10-01 13:44:07.519423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.283 [2024-10-01 13:44:07.519567] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.283 [2024-10-01 13:44:07.519604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.283 [2024-10-01 13:44:07.519623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.283 [2024-10-01 13:44:07.519659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.283 [2024-10-01 13:44:07.519692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.283 [2024-10-01 13:44:07.519710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.283 [2024-10-01 13:44:07.519725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.283 [2024-10-01 13:44:07.519758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.283 [2024-10-01 13:44:07.526573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.283 [2024-10-01 13:44:07.528031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.283 [2024-10-01 13:44:07.528089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.283 [2024-10-01 13:44:07.528116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.283 [2024-10-01 13:44:07.529169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.283 [2024-10-01 13:44:07.529427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.283 [2024-10-01 13:44:07.529490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.283 [2024-10-01 13:44:07.529513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.283 [2024-10-01 13:44:07.529675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.283 [2024-10-01 13:44:07.532438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.283 [2024-10-01 13:44:07.533583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.283 [2024-10-01 13:44:07.533637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.283 [2024-10-01 13:44:07.533663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.283 [2024-10-01 13:44:07.534970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.283 [2024-10-01 13:44:07.535235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.283 [2024-10-01 13:44:07.535278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.283 [2024-10-01 13:44:07.535299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.283 [2024-10-01 13:44:07.536690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.283 [2024-10-01 13:44:07.538032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.283 [2024-10-01 13:44:07.539059] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.283 [2024-10-01 13:44:07.539108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.283 [2024-10-01 13:44:07.539136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.283 [2024-10-01 13:44:07.540411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.283 [2024-10-01 13:44:07.540783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.283 [2024-10-01 13:44:07.540823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.283 [2024-10-01 13:44:07.540841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.283 [2024-10-01 13:44:07.540915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.283 [2024-10-01 13:44:07.542567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.283 [2024-10-01 13:44:07.542683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.283 [2024-10-01 13:44:07.542726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.283 [2024-10-01 13:44:07.542747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.283 [2024-10-01 13:44:07.542781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.283 [2024-10-01 13:44:07.542813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.283 [2024-10-01 13:44:07.542831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.283 [2024-10-01 13:44:07.542845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.283 [2024-10-01 13:44:07.544087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.283 [2024-10-01 13:44:07.548738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.283 [2024-10-01 13:44:07.548890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.283 [2024-10-01 13:44:07.548926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.283 [2024-10-01 13:44:07.548945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.283 [2024-10-01 13:44:07.548980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.283 [2024-10-01 13:44:07.549013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.283 [2024-10-01 13:44:07.549031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.283 [2024-10-01 13:44:07.549047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.283 [2024-10-01 13:44:07.549080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.283 [2024-10-01 13:44:07.553839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.283 [2024-10-01 13:44:07.553993] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.283 [2024-10-01 13:44:07.554033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.283 [2024-10-01 13:44:07.554054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.283 [2024-10-01 13:44:07.554089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.283 [2024-10-01 13:44:07.555193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.283 [2024-10-01 13:44:07.555237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.283 [2024-10-01 13:44:07.555256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.283 [2024-10-01 13:44:07.555917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.283 [2024-10-01 13:44:07.559955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.283 [2024-10-01 13:44:07.560074] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.283 [2024-10-01 13:44:07.560107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.283 [2024-10-01 13:44:07.560126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.283 [2024-10-01 13:44:07.560159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.283 [2024-10-01 13:44:07.560192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.283 [2024-10-01 13:44:07.560209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.283 [2024-10-01 13:44:07.560224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.283 [2024-10-01 13:44:07.560257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.283 [2024-10-01 13:44:07.563951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.283 [2024-10-01 13:44:07.564065] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.283 [2024-10-01 13:44:07.564096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.283 [2024-10-01 13:44:07.564114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.283 [2024-10-01 13:44:07.564172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.283 [2024-10-01 13:44:07.564205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.283 [2024-10-01 13:44:07.564222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.283 [2024-10-01 13:44:07.564237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.283 [2024-10-01 13:44:07.564269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.283 [2024-10-01 13:44:07.570528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.283 [2024-10-01 13:44:07.570693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.283 [2024-10-01 13:44:07.570743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.284 [2024-10-01 13:44:07.570765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.284 [2024-10-01 13:44:07.570800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.284 [2024-10-01 13:44:07.570833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.284 [2024-10-01 13:44:07.570851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.284 [2024-10-01 13:44:07.570865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.284 [2024-10-01 13:44:07.570898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.284 [2024-10-01 13:44:07.574607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.284 [2024-10-01 13:44:07.574724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.284 [2024-10-01 13:44:07.574758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.284 [2024-10-01 13:44:07.574776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.284 [2024-10-01 13:44:07.574814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.284 [2024-10-01 13:44:07.574848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.284 [2024-10-01 13:44:07.574865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.284 [2024-10-01 13:44:07.574879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.284 [2024-10-01 13:44:07.574911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.284 [2024-10-01 13:44:07.581508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.284 [2024-10-01 13:44:07.581639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.284 [2024-10-01 13:44:07.581682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.284 [2024-10-01 13:44:07.581702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.284 [2024-10-01 13:44:07.581736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.284 [2024-10-01 13:44:07.581769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.284 [2024-10-01 13:44:07.581787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.284 [2024-10-01 13:44:07.581819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.284 [2024-10-01 13:44:07.581855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.284 [2024-10-01 13:44:07.585051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.284 [2024-10-01 13:44:07.585173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.284 [2024-10-01 13:44:07.585215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.284 [2024-10-01 13:44:07.585236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.284 [2024-10-01 13:44:07.585271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.284 [2024-10-01 13:44:07.585304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.284 [2024-10-01 13:44:07.585322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.284 [2024-10-01 13:44:07.585336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.284 [2024-10-01 13:44:07.585369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.284 [2024-10-01 13:44:07.592634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.284 [2024-10-01 13:44:07.592752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.284 [2024-10-01 13:44:07.592785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.284 [2024-10-01 13:44:07.592803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.284 [2024-10-01 13:44:07.592837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.284 [2024-10-01 13:44:07.592869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.284 [2024-10-01 13:44:07.592887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.284 [2024-10-01 13:44:07.592901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.284 [2024-10-01 13:44:07.592933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.284 [2024-10-01 13:44:07.595151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.284 [2024-10-01 13:44:07.596024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.284 [2024-10-01 13:44:07.596069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.284 [2024-10-01 13:44:07.596090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.284 [2024-10-01 13:44:07.596271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.284 [2024-10-01 13:44:07.596367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.284 [2024-10-01 13:44:07.596393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.284 [2024-10-01 13:44:07.596409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.284 [2024-10-01 13:44:07.596442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.284 [2024-10-01 13:44:07.604017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.284 [2024-10-01 13:44:07.604251] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.284 [2024-10-01 13:44:07.604288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.284 [2024-10-01 13:44:07.604308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.284 [2024-10-01 13:44:07.604346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.284 [2024-10-01 13:44:07.604380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.284 [2024-10-01 13:44:07.604398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.284 [2024-10-01 13:44:07.604414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.284 [2024-10-01 13:44:07.604448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.284 [2024-10-01 13:44:07.605886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.284 [2024-10-01 13:44:07.606000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.284 [2024-10-01 13:44:07.606046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.284 [2024-10-01 13:44:07.606066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.284 [2024-10-01 13:44:07.606099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.284 [2024-10-01 13:44:07.606130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.284 [2024-10-01 13:44:07.606148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.284 [2024-10-01 13:44:07.606162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.284 [2024-10-01 13:44:07.606194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.284 [2024-10-01 13:44:07.614421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.284 [2024-10-01 13:44:07.614606] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.284 [2024-10-01 13:44:07.614641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.284 [2024-10-01 13:44:07.614661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.284 [2024-10-01 13:44:07.614697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.284 [2024-10-01 13:44:07.614730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.284 [2024-10-01 13:44:07.614748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.284 [2024-10-01 13:44:07.614763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.284 [2024-10-01 13:44:07.614796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.284 [2024-10-01 13:44:07.615981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.284 [2024-10-01 13:44:07.616092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.284 [2024-10-01 13:44:07.616134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.284 [2024-10-01 13:44:07.616154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.284 [2024-10-01 13:44:07.616187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.284 [2024-10-01 13:44:07.617552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.284 [2024-10-01 13:44:07.617591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.284 [2024-10-01 13:44:07.617609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.284 [2024-10-01 13:44:07.618515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.284 [2024-10-01 13:44:07.625097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.284 [2024-10-01 13:44:07.625213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.284 [2024-10-01 13:44:07.625256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.284 [2024-10-01 13:44:07.625276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.284 [2024-10-01 13:44:07.625310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.284 [2024-10-01 13:44:07.625342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.285 [2024-10-01 13:44:07.625359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.285 [2024-10-01 13:44:07.625373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.285 [2024-10-01 13:44:07.625405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.285 [2024-10-01 13:44:07.626743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.285 [2024-10-01 13:44:07.626856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.285 [2024-10-01 13:44:07.626887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.285 [2024-10-01 13:44:07.626905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.285 [2024-10-01 13:44:07.627984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.285 [2024-10-01 13:44:07.628647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.285 [2024-10-01 13:44:07.628686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.285 [2024-10-01 13:44:07.628704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.285 [2024-10-01 13:44:07.628791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.285 [2024-10-01 13:44:07.635900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.285 [2024-10-01 13:44:07.636015] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.285 [2024-10-01 13:44:07.636047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.285 [2024-10-01 13:44:07.636065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.285 [2024-10-01 13:44:07.636099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.285 [2024-10-01 13:44:07.636131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.285 [2024-10-01 13:44:07.636148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.285 [2024-10-01 13:44:07.636163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.285 [2024-10-01 13:44:07.636213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.285 [2024-10-01 13:44:07.636833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.285 [2024-10-01 13:44:07.638116] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.285 [2024-10-01 13:44:07.638160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.285 [2024-10-01 13:44:07.638180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.285 [2024-10-01 13:44:07.638397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.285 [2024-10-01 13:44:07.639180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.285 [2024-10-01 13:44:07.639218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.285 [2024-10-01 13:44:07.639236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.285 [2024-10-01 13:44:07.639443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.285 [2024-10-01 13:44:07.646778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.285 [2024-10-01 13:44:07.646896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.285 [2024-10-01 13:44:07.646937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.285 [2024-10-01 13:44:07.646957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.285 [2024-10-01 13:44:07.646993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.285 [2024-10-01 13:44:07.647039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.285 [2024-10-01 13:44:07.647060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.285 [2024-10-01 13:44:07.647075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.285 [2024-10-01 13:44:07.647108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.285 [2024-10-01 13:44:07.647132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.285 [2024-10-01 13:44:07.647209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.285 [2024-10-01 13:44:07.647237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.285 [2024-10-01 13:44:07.647255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.285 [2024-10-01 13:44:07.647287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.285 [2024-10-01 13:44:07.647318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.285 [2024-10-01 13:44:07.647335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.285 [2024-10-01 13:44:07.647350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.285 [2024-10-01 13:44:07.647381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.285 [2024-10-01 13:44:07.656929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.285 [2024-10-01 13:44:07.657051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.285 [2024-10-01 13:44:07.657094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.285 [2024-10-01 13:44:07.657135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.285 [2024-10-01 13:44:07.657172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.285 [2024-10-01 13:44:07.657221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.285 [2024-10-01 13:44:07.657243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.285 [2024-10-01 13:44:07.657258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.285 [2024-10-01 13:44:07.657292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.285 [2024-10-01 13:44:07.657316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.285 [2024-10-01 13:44:07.657640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.285 [2024-10-01 13:44:07.657683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.285 [2024-10-01 13:44:07.657702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.285 [2024-10-01 13:44:07.657866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.285 [2024-10-01 13:44:07.658015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.285 [2024-10-01 13:44:07.658060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.285 [2024-10-01 13:44:07.658090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.285 [2024-10-01 13:44:07.658137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
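The repeated "connect() failed, errno = 111" messages in the blocks above correspond to ECONNREFUSED: the initiator keeps retrying 10.0.0.3 ports 4421 and 4422, every attempt is refused, and each reconnect cycle therefore ends in "Resetting controller failed." As a side check that is not part of this run's output, the symbolic errno name can be confirmed on the Linux test VM with a one-liner, assuming python3 is available there:

# decode errno 111 locally; expected output on Linux: ECONNREFUSED - Connection refused
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'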
00:16:17.285 8682.80 IOPS, 33.92 MiB/s [2024-10-01 13:44:07.668614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.285 [2024-10-01 13:44:07.668671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.285 [2024-10-01 13:44:07.669144] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.285 [2024-10-01 13:44:07.669191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.285 [2024-10-01 13:44:07.669213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.285 [2024-10-01 13:44:07.669285] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.285 [2024-10-01 13:44:07.669319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.285 [2024-10-01 13:44:07.669338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.285 [2024-10-01 13:44:07.669487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.285 [2024-10-01 13:44:07.669518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.285 [2024-10-01 13:44:07.669641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.285 [2024-10-01 13:44:07.669674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.285 [2024-10-01 13:44:07.669692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.285 [2024-10-01 13:44:07.669710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.285 [2024-10-01 13:44:07.669748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.285 [2024-10-01 13:44:07.669764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.285 [2024-10-01 13:44:07.669879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.285 [2024-10-01 13:44:07.669902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.285 00:16:17.285 Latency(us) 00:16:17.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.285 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:17.285 Verification LBA range: start 0x0 length 0x4000 00:16:17.285 NVMe0n1 : 15.01 8683.21 33.92 0.00 0.00 14706.96 2055.45 19184.17 00:16:17.285 =================================================================================================================== 00:16:17.285 Total : 8683.21 33.92 0.00 0.00 14706.96 2055.45 19184.17 00:16:17.285 [2024-10-01 13:44:07.678732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.285 [2024-10-01 13:44:07.678812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.285 [2024-10-01 13:44:07.678911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.286 [2024-10-01 13:44:07.678951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.286 [2024-10-01 13:44:07.678972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.286 [2024-10-01 13:44:07.679030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.286 [2024-10-01 13:44:07.679062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.286 [2024-10-01 13:44:07.679085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.286 [2024-10-01 13:44:07.679106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.286 [2024-10-01 13:44:07.679127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.286 [2024-10-01 13:44:07.679145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.286 [2024-10-01 13:44:07.679160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.286 [2024-10-01 13:44:07.679174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.286 [2024-10-01 13:44:07.679201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.286 [2024-10-01 13:44:07.679217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.286 [2024-10-01 13:44:07.679231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.286 [2024-10-01 13:44:07.679244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.286 [2024-10-01 13:44:07.679261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
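The Device Information table above is internally consistent: with 4096-byte verify IOs, 8683.21 IOPS works out to about 33.92 MiB/s over the 15.01 s runtime. A small sanity check of that conversion, as a side calculation rather than output from this run:

# MiB/s = IOPS * IO size in bytes / 2^20
awk 'BEGIN { printf "%.2f MiB/s\n", 8683.21 * 4096 / (1024 * 1024) }'
# prints 33.92 MiB/s, matching the NVMe0n1 and Total rows of the table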
00:16:17.286 [2024-10-01 13:44:07.688826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.286 [2024-10-01 13:44:07.688929] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.286 [2024-10-01 13:44:07.688960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.286 [2024-10-01 13:44:07.688978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.286 [2024-10-01 13:44:07.689034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.286 [2024-10-01 13:44:07.689061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.286 [2024-10-01 13:44:07.689089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.286 [2024-10-01 13:44:07.689106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.286 [2024-10-01 13:44:07.689120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.286 [2024-10-01 13:44:07.689137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.286 [2024-10-01 13:44:07.689193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.286 [2024-10-01 13:44:07.689220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.286 [2024-10-01 13:44:07.689237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.286 [2024-10-01 13:44:07.689257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.286 [2024-10-01 13:44:07.689277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.286 [2024-10-01 13:44:07.689292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.286 [2024-10-01 13:44:07.689306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.286 [2024-10-01 13:44:07.689324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.286 [2024-10-01 13:44:07.698896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.286 [2024-10-01 13:44:07.698999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.286 [2024-10-01 13:44:07.699029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.286 [2024-10-01 13:44:07.699047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.286 [2024-10-01 13:44:07.699068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.286 [2024-10-01 13:44:07.699088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.286 [2024-10-01 13:44:07.699104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.286 [2024-10-01 13:44:07.699118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.286 [2024-10-01 13:44:07.699145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.286 [2024-10-01 13:44:07.699168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.286 [2024-10-01 13:44:07.699237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.286 [2024-10-01 13:44:07.699264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.286 [2024-10-01 13:44:07.699281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.286 [2024-10-01 13:44:07.699301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.286 [2024-10-01 13:44:07.699321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.286 [2024-10-01 13:44:07.699335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.286 [2024-10-01 13:44:07.699364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.286 [2024-10-01 13:44:07.699385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.286 [2024-10-01 13:44:07.708990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.286 [2024-10-01 13:44:07.709200] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.286 [2024-10-01 13:44:07.709236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.286 [2024-10-01 13:44:07.709264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.286 [2024-10-01 13:44:07.709299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.286 [2024-10-01 13:44:07.709335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.286 [2024-10-01 13:44:07.709355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.286 [2024-10-01 13:44:07.709372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.286 [2024-10-01 13:44:07.709401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.286 [2024-10-01 13:44:07.709425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.286 [2024-10-01 13:44:07.709492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.286 [2024-10-01 13:44:07.709520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.286 [2024-10-01 13:44:07.709554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.286 [2024-10-01 13:44:07.709578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.286 [2024-10-01 13:44:07.709599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.286 [2024-10-01 13:44:07.709614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.286 [2024-10-01 13:44:07.709628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.286 [2024-10-01 13:44:07.709647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.286 [2024-10-01 13:44:07.719124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.286 [2024-10-01 13:44:07.719315] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.286 [2024-10-01 13:44:07.719349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.286 [2024-10-01 13:44:07.719368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.286 [2024-10-01 13:44:07.719393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.286 [2024-10-01 13:44:07.719418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.286 [2024-10-01 13:44:07.719435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.286 [2024-10-01 13:44:07.719453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.287 [2024-10-01 13:44:07.719472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.287 [2024-10-01 13:44:07.719503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.287 [2024-10-01 13:44:07.719615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.287 [2024-10-01 13:44:07.719644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.287 [2024-10-01 13:44:07.719661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.287 [2024-10-01 13:44:07.719681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.287 [2024-10-01 13:44:07.719701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.287 [2024-10-01 13:44:07.719716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.287 [2024-10-01 13:44:07.719730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.287 [2024-10-01 13:44:07.719749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.287 Received shutdown signal, test time was about 15.000000 seconds 00:16:17.287 00:16:17.287 Latency(us) 00:16:17.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.287 =================================================================================================================== 00:16:17.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=1 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # false 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # trap - ERR 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # print_backtrace 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # args=('--transport=tcp') 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # local args 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1157 -- # xtrace_disable 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:17.287 ========== Backtrace start: ========== 00:16:17.287 00:16:17.287 in /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh:68 -> main(["--transport=tcp"]) 00:16:17.287 ... 00:16:17.287 63 cat $testdir/try.txt 00:16:17.287 64 # if this test fails it means we didn't fail over to the second 00:16:17.287 65 count="$(grep -c "Resetting controller successful" < $testdir/try.txt)" 00:16:17.287 66 00:16:17.287 67 if ((count != 3)); then 00:16:17.287 => 68 false 00:16:17.287 69 fi 00:16:17.287 70 00:16:17.287 71 # Part 2 of the test. Start removing ports, starting with the one we are connected to, confirm that the ctrlr remains active until the final trid is removed. 00:16:17.287 72 $rootdir/build/examples/bdevperf -z -r $bdevperf_rpc_sock -q 128 -o 4096 -w verify -t 1 -f &> $testdir/try.txt & 00:16:17.287 73 bdevperf_pid=$! 00:16:17.287 ... 
00:16:17.287 00:16:17.287 ========== Backtrace end ========== 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1194 -- # return 0 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # process_shm --id 0 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@808 -- # type=--id 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@809 -- # id=0 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@820 -- # for n in $shm_files 00:16:17.287 13:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:17.287 nvmf_trace.0 00:16:17.287 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@823 -- # return 0 00:16:17.287 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:17.287 [2024-10-01 13:43:50.775476] Starting SPDK v25.01-pre git sha1 7b38c9ede / DPDK 24.03.0 initialization... 00:16:17.287 [2024-10-01 13:43:50.775690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75429 ] 00:16:17.287 [2024-10-01 13:43:50.926782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.287 [2024-10-01 13:43:50.993285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.287 [2024-10-01 13:43:51.030256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:17.287 Running I/O for 15 seconds... 
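For context on the failure reported in the backtrace: host/failover.sh (lines 65-68 of the excerpt above) counts "Resetting controller successful" lines in try.txt and fails the run when the count is not 3, and this run produced count=1. The same check can be replayed against the archived try.txt; a rough sketch, assuming the file has been copied into the current directory:

count=$(grep -c "Resetting controller successful" try.txt)
echo "successful resets: $count"
((count == 3)) || echo "failover check fails: count != 3"

The log that follows is the contents of try.txt echoed back by the test script (the cat of try.txt above).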
00:16:17.287 7136.00 IOPS, 27.88 MiB/s [2024-10-01 13:43:53.783470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.287 [2024-10-01 13:43:53.783561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.783597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.287 [2024-10-01 13:43:53.783615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.783632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.287 [2024-10-01 13:43:53.783647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.783663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.287 [2024-10-01 13:43:53.783678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.783694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.287 [2024-10-01 13:43:53.783709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.783725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.287 [2024-10-01 13:43:53.783739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.783755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.287 [2024-10-01 13:43:53.783769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.783785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.287 [2024-10-01 13:43:53.783799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.783816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.287 [2024-10-01 13:43:53.783830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.783846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.287 [2024-10-01 13:43:53.783861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 
13:43:53.783893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.287 [2024-10-01 13:43:53.783941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.783959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.287 [2024-10-01 13:43:53.783974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.783996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.287 [2024-10-01 13:43:53.784013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.784035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.287 [2024-10-01 13:43:53.784052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.784068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.287 [2024-10-01 13:43:53.784083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.784099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.287 [2024-10-01 13:43:53.784121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.784137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.287 [2024-10-01 13:43:53.784152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.784169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.287 [2024-10-01 13:43:53.784184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.287 [2024-10-01 13:43:53.784200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.287 [2024-10-01 13:43:53.784214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784608] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.288 [2024-10-01 13:43:53.784698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.288 [2024-10-01 13:43:53.784731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.288 [2024-10-01 13:43:53.784769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.288 [2024-10-01 13:43:53.784808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.288 [2024-10-01 13:43:53.784841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.288 [2024-10-01 13:43:53.784870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.288 [2024-10-01 13:43:53.784900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.288 [2024-10-01 13:43:53.784930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65536 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.784977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.784992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.785022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.785052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.785082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.785112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.785142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.785172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.785219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.785272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:17.288 [2024-10-01 13:43:53.785322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.785376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.785433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.785476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.785513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.288 [2024-10-01 13:43:53.785562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.288 [2024-10-01 13:43:53.785595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.288 [2024-10-01 13:43:53.785624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.288 [2024-10-01 13:43:53.785655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.288 [2024-10-01 13:43:53.785671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.785685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.785701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.785715] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.785744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.785760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.785776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.785790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.785806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.785820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.785836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.785850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.785866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.785880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.785896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.785911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.785927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.785942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.785957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.785971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.785987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786035] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.289 [2024-10-01 13:43:53.786622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.786654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.786685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.786716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:17.289 [2024-10-01 13:43:53.786733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.786749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.786780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.786811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.786854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.786884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.786914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.786944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.786960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.289 [2024-10-01 13:43:53.786974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.289 [2024-10-01 13:43:53.787000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.290 [2024-10-01 13:43:53.787015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.290 [2024-10-01 13:43:53.787046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787062] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.290 [2024-10-01 13:43:53.787076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.290 [2024-10-01 13:43:53.787110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.290 [2024-10-01 13:43:53.787146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.290 [2024-10-01 13:43:53.787658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3e770 is same with the state(6) to be set 00:16:17.290 [2024-10-01 13:43:53.787692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.290 [2024-10-01 13:43:53.787703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.290 [2024-10-01 13:43:53.787714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65976 len:8 PRP1 0x0 PRP2 0x0 00:16:17.290 [2024-10-01 13:43:53.787728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787743] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.290 [2024-10-01 13:43:53.787754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.290 [2024-10-01 13:43:53.787766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66304 len:8 PRP1 0x0 PRP2 0x0 00:16:17.290 [2024-10-01 13:43:53.787780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.290 [2024-10-01 13:43:53.787807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.290 [2024-10-01 13:43:53.787824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66312 len:8 PRP1 0x0 PRP2 0x0 00:16:17.290 [2024-10-01 13:43:53.787850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.290 [2024-10-01 13:43:53.787893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.290 [2024-10-01 13:43:53.787904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66320 len:8 PRP1 0x0 PRP2 0x0 00:16:17.290 [2024-10-01 13:43:53.787918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.290 [2024-10-01 13:43:53.787944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.290 [2024-10-01 13:43:53.787954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66328 len:8 PRP1 0x0 PRP2 0x0 00:16:17.290 [2024-10-01 13:43:53.787967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.787982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.290 [2024-10-01 13:43:53.787993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.290 [2024-10-01 13:43:53.788007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66336 len:8 PRP1 0x0 PRP2 0x0 00:16:17.290 [2024-10-01 13:43:53.788027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.788046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.290 [2024-10-01 13:43:53.788063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.290 [2024-10-01 13:43:53.788082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66344 len:8 PRP1 0x0 PRP2 0x0 00:16:17.290 [2024-10-01 13:43:53.788107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.788132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:16:17.290 [2024-10-01 13:43:53.788149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.290 [2024-10-01 13:43:53.788169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66352 len:8 PRP1 0x0 PRP2 0x0 00:16:17.290 [2024-10-01 13:43:53.788188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.788207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.290 [2024-10-01 13:43:53.788227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.290 [2024-10-01 13:43:53.788239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66360 len:8 PRP1 0x0 PRP2 0x0 00:16:17.290 [2024-10-01 13:43:53.788253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.788305] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb3e770 was disconnected and freed. reset controller. 00:16:17.290 [2024-10-01 13:43:53.788435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.290 [2024-10-01 13:43:53.788464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.788481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.290 [2024-10-01 13:43:53.788495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.788524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.290 [2024-10-01 13:43:53.788560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.788584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.290 [2024-10-01 13:43:53.788598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.290 [2024-10-01 13:43:53.788612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.291 [2024-10-01 13:43:53.789679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.291 [2024-10-01 13:43:53.789723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.291 [2024-10-01 13:43:53.790174] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.291 [2024-10-01 13:43:53.790209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.291 [2024-10-01 13:43:53.790228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.291 
[2024-10-01 13:43:53.790264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.291 [2024-10-01 13:43:53.790298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.291 [2024-10-01 13:43:53.790315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.291 [2024-10-01 13:43:53.790331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.291 [2024-10-01 13:43:53.790367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.291 [2024-10-01 13:43:53.801348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.291 [2024-10-01 13:43:53.801487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.291 [2024-10-01 13:43:53.801521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.291 [2024-10-01 13:43:53.801558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.291 [2024-10-01 13:43:53.801775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.291 [2024-10-01 13:43:53.801927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.291 [2024-10-01 13:43:53.801956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.291 [2024-10-01 13:43:53.801973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.291 [2024-10-01 13:43:53.802031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.291 [2024-10-01 13:43:53.812803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.291 [2024-10-01 13:43:53.812946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.291 [2024-10-01 13:43:53.812981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.291 [2024-10-01 13:43:53.813000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.291 [2024-10-01 13:43:53.813036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.291 [2024-10-01 13:43:53.813094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.291 [2024-10-01 13:43:53.813114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.291 [2024-10-01 13:43:53.813129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.291 [2024-10-01 13:43:53.813162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.291 [2024-10-01 13:43:53.822937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.291 [2024-10-01 13:43:53.823187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.291 [2024-10-01 13:43:53.823230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.291 [2024-10-01 13:43:53.823253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.291 [2024-10-01 13:43:53.823296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.291 [2024-10-01 13:43:53.823331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.291 [2024-10-01 13:43:53.823349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.291 [2024-10-01 13:43:53.823366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.291 [2024-10-01 13:43:53.823400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.291 [2024-10-01 13:43:53.834519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.291 [2024-10-01 13:43:53.834750] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.291 [2024-10-01 13:43:53.834789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.291 [2024-10-01 13:43:53.834809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.291 [2024-10-01 13:43:53.835620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.291 [2024-10-01 13:43:53.835829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.291 [2024-10-01 13:43:53.835882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.291 [2024-10-01 13:43:53.835907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.291 [2024-10-01 13:43:53.835962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.291 [2024-10-01 13:43:53.844705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.291 [2024-10-01 13:43:53.844905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.291 [2024-10-01 13:43:53.844944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.291 [2024-10-01 13:43:53.844964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.291 [2024-10-01 13:43:53.845003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.291 [2024-10-01 13:43:53.845050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.291 [2024-10-01 13:43:53.845068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.291 [2024-10-01 13:43:53.845084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.291 [2024-10-01 13:43:53.845166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.291 [2024-10-01 13:43:53.854866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.291 [2024-10-01 13:43:53.856310] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.291 [2024-10-01 13:43:53.856363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.291 [2024-10-01 13:43:53.856386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.291 [2024-10-01 13:43:53.857278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.291 [2024-10-01 13:43:53.857443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.291 [2024-10-01 13:43:53.857483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.291 [2024-10-01 13:43:53.857503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.291 [2024-10-01 13:43:53.857558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.291 [2024-10-01 13:43:53.866479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.291 [2024-10-01 13:43:53.866718] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.291 [2024-10-01 13:43:53.866757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.291 [2024-10-01 13:43:53.866778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.291 [2024-10-01 13:43:53.866819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.291 [2024-10-01 13:43:53.866857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.291 [2024-10-01 13:43:53.866875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.291 [2024-10-01 13:43:53.866892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.291 [2024-10-01 13:43:53.866927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.291 [2024-10-01 13:43:53.878286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.291 [2024-10-01 13:43:53.879098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.291 [2024-10-01 13:43:53.879148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.291 [2024-10-01 13:43:53.879172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.291 [2024-10-01 13:43:53.879286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.291 [2024-10-01 13:43:53.879329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.291 [2024-10-01 13:43:53.879348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.291 [2024-10-01 13:43:53.879363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.291 [2024-10-01 13:43:53.879399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.291 [2024-10-01 13:43:53.889370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.292 [2024-10-01 13:43:53.889509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.292 [2024-10-01 13:43:53.889556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.292 [2024-10-01 13:43:53.889609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.292 [2024-10-01 13:43:53.889647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.292 [2024-10-01 13:43:53.889681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.292 [2024-10-01 13:43:53.889699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.292 [2024-10-01 13:43:53.889714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.292 [2024-10-01 13:43:53.889746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.292 [2024-10-01 13:43:53.900771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.292 [2024-10-01 13:43:53.900936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.292 [2024-10-01 13:43:53.900986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.292 [2024-10-01 13:43:53.901008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.292 [2024-10-01 13:43:53.901044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.292 [2024-10-01 13:43:53.901077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.292 [2024-10-01 13:43:53.901095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.292 [2024-10-01 13:43:53.901110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.292 [2024-10-01 13:43:53.901143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.292 [2024-10-01 13:43:53.911285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.292 [2024-10-01 13:43:53.911488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.292 [2024-10-01 13:43:53.911525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.292 [2024-10-01 13:43:53.911565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.292 [2024-10-01 13:43:53.912562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.292 [2024-10-01 13:43:53.912797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.292 [2024-10-01 13:43:53.912834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.292 [2024-10-01 13:43:53.912854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.292 [2024-10-01 13:43:53.912946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.292 [2024-10-01 13:43:53.922358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.292 [2024-10-01 13:43:53.922509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.292 [2024-10-01 13:43:53.922565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.292 [2024-10-01 13:43:53.922589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.292 [2024-10-01 13:43:53.922627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.292 [2024-10-01 13:43:53.922661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.292 [2024-10-01 13:43:53.922704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.292 [2024-10-01 13:43:53.922721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.292 [2024-10-01 13:43:53.922755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.292 [2024-10-01 13:43:53.932854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.292 [2024-10-01 13:43:53.933079] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.292 [2024-10-01 13:43:53.933119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.292 [2024-10-01 13:43:53.933140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.292 [2024-10-01 13:43:53.933179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.292 [2024-10-01 13:43:53.933213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.292 [2024-10-01 13:43:53.933232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.292 [2024-10-01 13:43:53.933248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.292 [2024-10-01 13:43:53.933281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.292 [2024-10-01 13:43:53.944357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.292 [2024-10-01 13:43:53.944605] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.292 [2024-10-01 13:43:53.944646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.292 [2024-10-01 13:43:53.944668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.292 [2024-10-01 13:43:53.944708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.292 [2024-10-01 13:43:53.944743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.292 [2024-10-01 13:43:53.944761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.292 [2024-10-01 13:43:53.944777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.292 [2024-10-01 13:43:53.944813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.292 [2024-10-01 13:43:53.955020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.292 [2024-10-01 13:43:53.955214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.292 [2024-10-01 13:43:53.955251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.292 [2024-10-01 13:43:53.955271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.292 [2024-10-01 13:43:53.955307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.292 [2024-10-01 13:43:53.956323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.292 [2024-10-01 13:43:53.956369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.292 [2024-10-01 13:43:53.956397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.292 [2024-10-01 13:43:53.956629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.292 [2024-10-01 13:43:53.966659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.292 [2024-10-01 13:43:53.966928] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.292 [2024-10-01 13:43:53.966967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.292 [2024-10-01 13:43:53.966987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.292 [2024-10-01 13:43:53.967026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.292 [2024-10-01 13:43:53.967080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.292 [2024-10-01 13:43:53.967102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.292 [2024-10-01 13:43:53.967118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.292 [2024-10-01 13:43:53.967153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.292 [2024-10-01 13:43:53.977583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.292 [2024-10-01 13:43:53.977769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.292 [2024-10-01 13:43:53.977807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.292 [2024-10-01 13:43:53.977827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.292 [2024-10-01 13:43:53.977863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.292 [2024-10-01 13:43:53.977897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.292 [2024-10-01 13:43:53.977915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.292 [2024-10-01 13:43:53.977931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.292 [2024-10-01 13:43:53.977965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.292 [2024-10-01 13:43:53.989026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.292 [2024-10-01 13:43:53.989337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.292 [2024-10-01 13:43:53.989384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.292 [2024-10-01 13:43:53.989406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.292 [2024-10-01 13:43:53.989450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.292 [2024-10-01 13:43:53.989486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.292 [2024-10-01 13:43:53.989505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.292 [2024-10-01 13:43:53.989519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.292 [2024-10-01 13:43:53.989569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.292 [2024-10-01 13:43:53.999418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.292 [2024-10-01 13:43:53.999567] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.292 [2024-10-01 13:43:53.999609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.292 [2024-10-01 13:43:53.999630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.293 [2024-10-01 13:43:53.999695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.293 [2024-10-01 13:43:54.000655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.293 [2024-10-01 13:43:54.000696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.293 [2024-10-01 13:43:54.000714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.293 [2024-10-01 13:43:54.000911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.293 [2024-10-01 13:43:54.010511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.293 [2024-10-01 13:43:54.010652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.293 [2024-10-01 13:43:54.010687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.293 [2024-10-01 13:43:54.010707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.293 [2024-10-01 13:43:54.010747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.293 [2024-10-01 13:43:54.010780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.293 [2024-10-01 13:43:54.010798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.293 [2024-10-01 13:43:54.010812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.293 [2024-10-01 13:43:54.010845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.293 [2024-10-01 13:43:54.020795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.293 [2024-10-01 13:43:54.020964] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.293 [2024-10-01 13:43:54.021024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.293 [2024-10-01 13:43:54.021053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.293 [2024-10-01 13:43:54.021099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.293 [2024-10-01 13:43:54.021140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.293 [2024-10-01 13:43:54.021163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.293 [2024-10-01 13:43:54.021183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.293 [2024-10-01 13:43:54.021223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.293 [2024-10-01 13:43:54.032680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.293 [2024-10-01 13:43:54.033066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.293 [2024-10-01 13:43:54.033125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.293 [2024-10-01 13:43:54.033151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.293 [2024-10-01 13:43:54.033200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.293 [2024-10-01 13:43:54.033236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.293 [2024-10-01 13:43:54.033254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.293 [2024-10-01 13:43:54.033301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.293 [2024-10-01 13:43:54.033340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.293 [2024-10-01 13:43:54.043660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.293 [2024-10-01 13:43:54.043838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.293 [2024-10-01 13:43:54.043889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.293 [2024-10-01 13:43:54.043912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.293 [2024-10-01 13:43:54.044892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.293 [2024-10-01 13:43:54.045114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.293 [2024-10-01 13:43:54.045160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.293 [2024-10-01 13:43:54.045179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.293 [2024-10-01 13:43:54.045267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.293 [2024-10-01 13:43:54.055028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.293 [2024-10-01 13:43:54.055160] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.293 [2024-10-01 13:43:54.055201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.293 [2024-10-01 13:43:54.055221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.293 [2024-10-01 13:43:54.055256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.293 [2024-10-01 13:43:54.055289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.293 [2024-10-01 13:43:54.055306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.293 [2024-10-01 13:43:54.055320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.293 [2024-10-01 13:43:54.055362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.293 [2024-10-01 13:43:54.065446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.293 [2024-10-01 13:43:54.065593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.293 [2024-10-01 13:43:54.065629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.293 [2024-10-01 13:43:54.065649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.293 [2024-10-01 13:43:54.065685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.293 [2024-10-01 13:43:54.065718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.293 [2024-10-01 13:43:54.065737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.293 [2024-10-01 13:43:54.065751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.293 [2024-10-01 13:43:54.065784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.293 [2024-10-01 13:43:54.076727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.293 [2024-10-01 13:43:54.076869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.293 [2024-10-01 13:43:54.076942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.293 [2024-10-01 13:43:54.076965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.293 [2024-10-01 13:43:54.077016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.293 [2024-10-01 13:43:54.077053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.293 [2024-10-01 13:43:54.077071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.293 [2024-10-01 13:43:54.077086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.293 [2024-10-01 13:43:54.077129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.293 [2024-10-01 13:43:54.087096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.293 [2024-10-01 13:43:54.087223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.293 [2024-10-01 13:43:54.087258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.293 [2024-10-01 13:43:54.087277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.293 [2024-10-01 13:43:54.087312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.293 [2024-10-01 13:43:54.088296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.293 [2024-10-01 13:43:54.088340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.293 [2024-10-01 13:43:54.088358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.293 [2024-10-01 13:43:54.088573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.293 [2024-10-01 13:43:54.098074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.293 [2024-10-01 13:43:54.098216] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.293 [2024-10-01 13:43:54.098261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.293 [2024-10-01 13:43:54.098282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.293 [2024-10-01 13:43:54.098317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.293 [2024-10-01 13:43:54.098350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.293 [2024-10-01 13:43:54.098376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.293 [2024-10-01 13:43:54.098395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.293 [2024-10-01 13:43:54.098429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.293 [2024-10-01 13:43:54.108563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.293 [2024-10-01 13:43:54.108685] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.293 [2024-10-01 13:43:54.108733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.293 [2024-10-01 13:43:54.108759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.293 [2024-10-01 13:43:54.109027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.293 [2024-10-01 13:43:54.109131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.293 [2024-10-01 13:43:54.109155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.294 [2024-10-01 13:43:54.109171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.294 [2024-10-01 13:43:54.109204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.294 [2024-10-01 13:43:54.118703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.294 [2024-10-01 13:43:54.118921] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.294 [2024-10-01 13:43:54.118960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.294 [2024-10-01 13:43:54.118981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.294 [2024-10-01 13:43:54.120276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.294 [2024-10-01 13:43:54.121211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.294 [2024-10-01 13:43:54.121254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.294 [2024-10-01 13:43:54.121274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.294 [2024-10-01 13:43:54.121504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.294 [2024-10-01 13:43:54.128873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.294 [2024-10-01 13:43:54.129282] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.294 [2024-10-01 13:43:54.129331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.294 [2024-10-01 13:43:54.129353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.294 [2024-10-01 13:43:54.129509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.294 [2024-10-01 13:43:54.129655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.294 [2024-10-01 13:43:54.129686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.294 [2024-10-01 13:43:54.129704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.294 [2024-10-01 13:43:54.129747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.294 [2024-10-01 13:43:54.139007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.294 [2024-10-01 13:43:54.139211] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.294 [2024-10-01 13:43:54.139250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.294 [2024-10-01 13:43:54.139270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.294 [2024-10-01 13:43:54.139307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.294 [2024-10-01 13:43:54.139340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.294 [2024-10-01 13:43:54.139357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.294 [2024-10-01 13:43:54.139374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.294 [2024-10-01 13:43:54.139442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.294 [2024-10-01 13:43:54.149328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.294 [2024-10-01 13:43:54.149549] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.294 [2024-10-01 13:43:54.149587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.294 [2024-10-01 13:43:54.149607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.294 [2024-10-01 13:43:54.150396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.294 [2024-10-01 13:43:54.150617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.294 [2024-10-01 13:43:54.150653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.294 [2024-10-01 13:43:54.150672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.294 [2024-10-01 13:43:54.150717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.294 [2024-10-01 13:43:54.159550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.294 [2024-10-01 13:43:54.159769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.294 [2024-10-01 13:43:54.159814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.294 [2024-10-01 13:43:54.159835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.294 [2024-10-01 13:43:54.159903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.294 [2024-10-01 13:43:54.159961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.294 [2024-10-01 13:43:54.159984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.294 [2024-10-01 13:43:54.160000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.294 [2024-10-01 13:43:54.160034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.294 [2024-10-01 13:43:54.171135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.294 [2024-10-01 13:43:54.171359] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.294 [2024-10-01 13:43:54.171398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.294 [2024-10-01 13:43:54.171418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.294 [2024-10-01 13:43:54.171456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.294 [2024-10-01 13:43:54.171490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.294 [2024-10-01 13:43:54.171509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.294 [2024-10-01 13:43:54.171526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.294 [2024-10-01 13:43:54.171578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.294 [2024-10-01 13:43:54.182603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.294 [2024-10-01 13:43:54.182823] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.294 [2024-10-01 13:43:54.182862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.294 [2024-10-01 13:43:54.182916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.294 [2024-10-01 13:43:54.182957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.294 [2024-10-01 13:43:54.182992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.294 [2024-10-01 13:43:54.183010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.294 [2024-10-01 13:43:54.183025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.294 [2024-10-01 13:43:54.183059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.294 [2024-10-01 13:43:54.193094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.294 [2024-10-01 13:43:54.193349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.294 [2024-10-01 13:43:54.193390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.294 [2024-10-01 13:43:54.193411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.294 [2024-10-01 13:43:54.194403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.294 [2024-10-01 13:43:54.194685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.294 [2024-10-01 13:43:54.194724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.294 [2024-10-01 13:43:54.194744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.294 [2024-10-01 13:43:54.194868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.294 [2024-10-01 13:43:54.204214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.294 [2024-10-01 13:43:54.204416] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.294 [2024-10-01 13:43:54.204452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.294 [2024-10-01 13:43:54.204472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.294 [2024-10-01 13:43:54.204509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.294 [2024-10-01 13:43:54.204560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.294 [2024-10-01 13:43:54.204581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.294 [2024-10-01 13:43:54.204598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.294 [2024-10-01 13:43:54.204632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.294 [2024-10-01 13:43:54.214457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.294 [2024-10-01 13:43:54.214610] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.294 [2024-10-01 13:43:54.214648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.294 [2024-10-01 13:43:54.214668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.294 [2024-10-01 13:43:54.214703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.294 [2024-10-01 13:43:54.214737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.294 [2024-10-01 13:43:54.214786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.294 [2024-10-01 13:43:54.214803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.294 [2024-10-01 13:43:54.214838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.294 [2024-10-01 13:43:54.225868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.295 [2024-10-01 13:43:54.226083] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.295 [2024-10-01 13:43:54.226122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.295 [2024-10-01 13:43:54.226143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.295 [2024-10-01 13:43:54.226180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.295 [2024-10-01 13:43:54.226214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.295 [2024-10-01 13:43:54.226231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.295 [2024-10-01 13:43:54.226248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.295 [2024-10-01 13:43:54.226282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.295 [2024-10-01 13:43:54.236216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.295 [2024-10-01 13:43:54.236390] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.295 [2024-10-01 13:43:54.236426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.295 [2024-10-01 13:43:54.236446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.295 [2024-10-01 13:43:54.237387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.295 [2024-10-01 13:43:54.237637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.295 [2024-10-01 13:43:54.237675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.295 [2024-10-01 13:43:54.237694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.295 [2024-10-01 13:43:54.237777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.295 [2024-10-01 13:43:54.247170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.295 [2024-10-01 13:43:54.247304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.295 [2024-10-01 13:43:54.247339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.295 [2024-10-01 13:43:54.247357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.295 [2024-10-01 13:43:54.247391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.295 [2024-10-01 13:43:54.247424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.295 [2024-10-01 13:43:54.247442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.295 [2024-10-01 13:43:54.247456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.295 [2024-10-01 13:43:54.247488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.295 [2024-10-01 13:43:54.257508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.295 [2024-10-01 13:43:54.257755] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.295 [2024-10-01 13:43:54.257794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.295 [2024-10-01 13:43:54.257814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.295 [2024-10-01 13:43:54.257851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.295 [2024-10-01 13:43:54.257885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.295 [2024-10-01 13:43:54.257903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.295 [2024-10-01 13:43:54.257919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.295 [2024-10-01 13:43:54.257958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.295 [2024-10-01 13:43:54.269057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.295 [2024-10-01 13:43:54.269253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.295 [2024-10-01 13:43:54.269290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.295 [2024-10-01 13:43:54.269309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.295 [2024-10-01 13:43:54.269346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.295 [2024-10-01 13:43:54.269379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.295 [2024-10-01 13:43:54.269396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.295 [2024-10-01 13:43:54.269412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.295 [2024-10-01 13:43:54.269446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.295 [2024-10-01 13:43:54.279379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.295 [2024-10-01 13:43:54.279499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.295 [2024-10-01 13:43:54.279532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.295 [2024-10-01 13:43:54.279569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.295 [2024-10-01 13:43:54.279619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.295 [2024-10-01 13:43:54.280569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.295 [2024-10-01 13:43:54.280608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.295 [2024-10-01 13:43:54.280626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.295 [2024-10-01 13:43:54.280818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.295 [2024-10-01 13:43:54.290328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.295 [2024-10-01 13:43:54.290449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.295 [2024-10-01 13:43:54.290501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.295 [2024-10-01 13:43:54.290522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.295 [2024-10-01 13:43:54.290604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.295 [2024-10-01 13:43:54.290638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.295 [2024-10-01 13:43:54.290655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.295 [2024-10-01 13:43:54.290669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.295 [2024-10-01 13:43:54.290701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.295 [2024-10-01 13:43:54.300687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.295 [2024-10-01 13:43:54.300862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.295 [2024-10-01 13:43:54.300898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.295 [2024-10-01 13:43:54.300918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.295 [2024-10-01 13:43:54.300954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.295 [2024-10-01 13:43:54.300987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.295 [2024-10-01 13:43:54.301005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.295 [2024-10-01 13:43:54.301021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.295 [2024-10-01 13:43:54.301053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.295 [2024-10-01 13:43:54.311962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.295 [2024-10-01 13:43:54.312095] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.295 [2024-10-01 13:43:54.312129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.295 [2024-10-01 13:43:54.312148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.295 [2024-10-01 13:43:54.312182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.295 [2024-10-01 13:43:54.312215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.295 [2024-10-01 13:43:54.312233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.295 [2024-10-01 13:43:54.312247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.295 [2024-10-01 13:43:54.312280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.295 [2024-10-01 13:43:54.322315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.295 [2024-10-01 13:43:54.322451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.296 [2024-10-01 13:43:54.322495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.296 [2024-10-01 13:43:54.322515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.296 [2024-10-01 13:43:54.322564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.296 [2024-10-01 13:43:54.322600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.296 [2024-10-01 13:43:54.322618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.296 [2024-10-01 13:43:54.322656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.296 [2024-10-01 13:43:54.323586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.296 [2024-10-01 13:43:54.333270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.296 [2024-10-01 13:43:54.333393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.296 [2024-10-01 13:43:54.333426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.296 [2024-10-01 13:43:54.333444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.296 [2024-10-01 13:43:54.333478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.296 [2024-10-01 13:43:54.333510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.296 [2024-10-01 13:43:54.333528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.296 [2024-10-01 13:43:54.333558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.296 [2024-10-01 13:43:54.333593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
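Each cycle also logs "Failed to flush tqpair=0xacf2e0 (9)": the number in parentheses is errno 9 (EBADF). Once the reconnect attempt immediately before it has failed, the qpair's socket descriptor has already been torn down, so the follow-up flush can only report a bad file descriptor. A minimal, non-SPDK sketch of that same errno:

/* Minimal sketch: errno 9 (EBADF) -- the "(9)" in the flush failures --
 * is what any I/O call returns once the descriptor has been closed. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = dup(1);   /* any valid descriptor */
    close(fd);         /* tear it down, as the failed reconnect does */
    if (write(fd, "x", 1) < 0)
        printf("write: errno = %d (%s)\n", errno, strerror(errno)); /* 9, Bad file descriptor */
    return 0;
}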
00:16:17.296 [2024-10-01 13:43:54.343595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.296 [2024-10-01 13:43:54.343716] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.296 [2024-10-01 13:43:54.343758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.296 [2024-10-01 13:43:54.343778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.296 [2024-10-01 13:43:54.343812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.296 [2024-10-01 13:43:54.343845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.296 [2024-10-01 13:43:54.343863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.296 [2024-10-01 13:43:54.343889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.296 [2024-10-01 13:43:54.343922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.296 [2024-10-01 13:43:54.354873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.296 [2024-10-01 13:43:54.355008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.296 [2024-10-01 13:43:54.355041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.296 [2024-10-01 13:43:54.355059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.296 [2024-10-01 13:43:54.355094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.296 [2024-10-01 13:43:54.355127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.296 [2024-10-01 13:43:54.355144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.296 [2024-10-01 13:43:54.355158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.296 [2024-10-01 13:43:54.355190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.296 [2024-10-01 13:43:54.365235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.296 [2024-10-01 13:43:54.365379] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.296 [2024-10-01 13:43:54.365453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.296 [2024-10-01 13:43:54.365476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.296 [2024-10-01 13:43:54.366415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.296 [2024-10-01 13:43:54.366641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.296 [2024-10-01 13:43:54.366676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.296 [2024-10-01 13:43:54.366694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.296 [2024-10-01 13:43:54.366774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.296 [2024-10-01 13:43:54.376274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.296 [2024-10-01 13:43:54.376398] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.296 [2024-10-01 13:43:54.376430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.296 [2024-10-01 13:43:54.376449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.296 [2024-10-01 13:43:54.376483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.296 [2024-10-01 13:43:54.376515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.296 [2024-10-01 13:43:54.376533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.296 [2024-10-01 13:43:54.376565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.296 [2024-10-01 13:43:54.376598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.296 [2024-10-01 13:43:54.386504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.296 [2024-10-01 13:43:54.386676] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.296 [2024-10-01 13:43:54.386712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.296 [2024-10-01 13:43:54.386732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.296 [2024-10-01 13:43:54.386767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.296 [2024-10-01 13:43:54.386801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.296 [2024-10-01 13:43:54.386819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.296 [2024-10-01 13:43:54.386835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.296 [2024-10-01 13:43:54.386868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.296 [2024-10-01 13:43:54.398222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.296 [2024-10-01 13:43:54.398380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.296 [2024-10-01 13:43:54.398423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.296 [2024-10-01 13:43:54.398443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.296 [2024-10-01 13:43:54.398479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.296 [2024-10-01 13:43:54.398577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.296 [2024-10-01 13:43:54.398605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.296 [2024-10-01 13:43:54.398620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.296 [2024-10-01 13:43:54.398655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.296 [2024-10-01 13:43:54.408517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.296 [2024-10-01 13:43:54.408685] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.296 [2024-10-01 13:43:54.408720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.296 [2024-10-01 13:43:54.408739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.296 [2024-10-01 13:43:54.409676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.296 [2024-10-01 13:43:54.409891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.296 [2024-10-01 13:43:54.409928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.296 [2024-10-01 13:43:54.409946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.296 [2024-10-01 13:43:54.410027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.296 [2024-10-01 13:43:54.419817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.296 [2024-10-01 13:43:54.419984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.296 [2024-10-01 13:43:54.420034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.296 [2024-10-01 13:43:54.420056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.296 [2024-10-01 13:43:54.420092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.296 [2024-10-01 13:43:54.420126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.296 [2024-10-01 13:43:54.420145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.296 [2024-10-01 13:43:54.420160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.296 [2024-10-01 13:43:54.420194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.296 [2024-10-01 13:43:54.430595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.296 [2024-10-01 13:43:54.430721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.296 [2024-10-01 13:43:54.430756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.296 [2024-10-01 13:43:54.430774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.296 [2024-10-01 13:43:54.430809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.297 [2024-10-01 13:43:54.430842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.297 [2024-10-01 13:43:54.430860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.297 [2024-10-01 13:43:54.430875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.297 [2024-10-01 13:43:54.430934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.297 [2024-10-01 13:43:54.442092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.297 [2024-10-01 13:43:54.442247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.297 [2024-10-01 13:43:54.442282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.297 [2024-10-01 13:43:54.442301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.297 [2024-10-01 13:43:54.442335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.297 [2024-10-01 13:43:54.442368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.297 [2024-10-01 13:43:54.442385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.297 [2024-10-01 13:43:54.442400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.297 [2024-10-01 13:43:54.442433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.297 [2024-10-01 13:43:54.452859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.297 [2024-10-01 13:43:54.452991] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.297 [2024-10-01 13:43:54.453025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.297 [2024-10-01 13:43:54.453044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.297 [2024-10-01 13:43:54.453079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.297 [2024-10-01 13:43:54.454010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.297 [2024-10-01 13:43:54.454050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.297 [2024-10-01 13:43:54.454068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.297 [2024-10-01 13:43:54.454267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.297 [2024-10-01 13:43:54.464039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.297 [2024-10-01 13:43:54.464168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.297 [2024-10-01 13:43:54.464202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.297 [2024-10-01 13:43:54.464221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.297 [2024-10-01 13:43:54.464255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.297 [2024-10-01 13:43:54.464289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.297 [2024-10-01 13:43:54.464306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.297 [2024-10-01 13:43:54.464330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.297 [2024-10-01 13:43:54.464362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.297 [2024-10-01 13:43:54.474274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.297 [2024-10-01 13:43:54.474404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.297 [2024-10-01 13:43:54.474438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.297 [2024-10-01 13:43:54.474487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.297 [2024-10-01 13:43:54.474524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.297 [2024-10-01 13:43:54.474576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.297 [2024-10-01 13:43:54.474596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.297 [2024-10-01 13:43:54.474611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.297 [2024-10-01 13:43:54.474643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.297 [2024-10-01 13:43:54.486022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.297 [2024-10-01 13:43:54.486164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.297 [2024-10-01 13:43:54.486209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.297 [2024-10-01 13:43:54.486231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.297 [2024-10-01 13:43:54.486266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.297 [2024-10-01 13:43:54.486299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.297 [2024-10-01 13:43:54.486317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.297 [2024-10-01 13:43:54.486332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.297 [2024-10-01 13:43:54.486364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.297 [2024-10-01 13:43:54.497406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.297 [2024-10-01 13:43:54.497622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.297 [2024-10-01 13:43:54.497660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.297 [2024-10-01 13:43:54.497680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.297 [2024-10-01 13:43:54.497718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.297 [2024-10-01 13:43:54.497752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.297 [2024-10-01 13:43:54.497771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.297 [2024-10-01 13:43:54.497787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.297 [2024-10-01 13:43:54.497820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.297 [2024-10-01 13:43:54.509551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.297 [2024-10-01 13:43:54.510302] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.297 [2024-10-01 13:43:54.510353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.297 [2024-10-01 13:43:54.510376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.297 [2024-10-01 13:43:54.510487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.297 [2024-10-01 13:43:54.510529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.297 [2024-10-01 13:43:54.510596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.297 [2024-10-01 13:43:54.510613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.297 [2024-10-01 13:43:54.510650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.297 [2024-10-01 13:43:54.520960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.297 [2024-10-01 13:43:54.521119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.297 [2024-10-01 13:43:54.521166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.297 [2024-10-01 13:43:54.521187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.297 [2024-10-01 13:43:54.521223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.297 [2024-10-01 13:43:54.521257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.297 [2024-10-01 13:43:54.521275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.297 [2024-10-01 13:43:54.521289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.297 [2024-10-01 13:43:54.521322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.297 [2024-10-01 13:43:54.532520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.297 [2024-10-01 13:43:54.532678] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.297 [2024-10-01 13:43:54.532721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.297 [2024-10-01 13:43:54.532743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.297 [2024-10-01 13:43:54.532779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.297 [2024-10-01 13:43:54.532812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.297 [2024-10-01 13:43:54.532830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.297 [2024-10-01 13:43:54.532844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.297 [2024-10-01 13:43:54.532877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.297 [2024-10-01 13:43:54.542897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.297 [2024-10-01 13:43:54.543028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.297 [2024-10-01 13:43:54.543062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.297 [2024-10-01 13:43:54.543081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.297 [2024-10-01 13:43:54.543115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.297 [2024-10-01 13:43:54.544091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.297 [2024-10-01 13:43:54.544136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.297 [2024-10-01 13:43:54.544154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.297 [2024-10-01 13:43:54.544370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.298 [2024-10-01 13:43:54.554048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.298 [2024-10-01 13:43:54.554185] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.298 [2024-10-01 13:43:54.554220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.298 [2024-10-01 13:43:54.554239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.298 [2024-10-01 13:43:54.554273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.298 [2024-10-01 13:43:54.554306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.298 [2024-10-01 13:43:54.554323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.298 [2024-10-01 13:43:54.554338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.298 [2024-10-01 13:43:54.554370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.298 [2024-10-01 13:43:54.564341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.298 [2024-10-01 13:43:54.564486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.298 [2024-10-01 13:43:54.564526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.298 [2024-10-01 13:43:54.564565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.298 [2024-10-01 13:43:54.564604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.298 [2024-10-01 13:43:54.564645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.298 [2024-10-01 13:43:54.564665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.298 [2024-10-01 13:43:54.564680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.298 [2024-10-01 13:43:54.564712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.298 [2024-10-01 13:43:54.575896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.298 [2024-10-01 13:43:54.576053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.298 [2024-10-01 13:43:54.576091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.298 [2024-10-01 13:43:54.576111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.298 [2024-10-01 13:43:54.576146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.298 [2024-10-01 13:43:54.576179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.298 [2024-10-01 13:43:54.576197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.298 [2024-10-01 13:43:54.576212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.298 [2024-10-01 13:43:54.576244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.298 [2024-10-01 13:43:54.586249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.298 [2024-10-01 13:43:54.586397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.298 [2024-10-01 13:43:54.586438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.298 [2024-10-01 13:43:54.586459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.298 [2024-10-01 13:43:54.587440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.298 [2024-10-01 13:43:54.587685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.298 [2024-10-01 13:43:54.587725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.298 [2024-10-01 13:43:54.587743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.298 [2024-10-01 13:43:54.587826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.298 [2024-10-01 13:43:54.597359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.298 [2024-10-01 13:43:54.597564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.298 [2024-10-01 13:43:54.597603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.298 [2024-10-01 13:43:54.597624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.298 [2024-10-01 13:43:54.597661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.298 [2024-10-01 13:43:54.597695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.298 [2024-10-01 13:43:54.597713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.298 [2024-10-01 13:43:54.597729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.298 [2024-10-01 13:43:54.597763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.298 [2024-10-01 13:43:54.607699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.298 [2024-10-01 13:43:54.607836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.298 [2024-10-01 13:43:54.607886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.298 [2024-10-01 13:43:54.607908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.298 [2024-10-01 13:43:54.607945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.298 [2024-10-01 13:43:54.607978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.298 [2024-10-01 13:43:54.607996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.298 [2024-10-01 13:43:54.608011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.298 [2024-10-01 13:43:54.608043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.298 [2024-10-01 13:43:54.618968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.298 [2024-10-01 13:43:54.619106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.298 [2024-10-01 13:43:54.619150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.298 [2024-10-01 13:43:54.619171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.298 [2024-10-01 13:43:54.619207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.298 [2024-10-01 13:43:54.619256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.298 [2024-10-01 13:43:54.619279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.298 [2024-10-01 13:43:54.619323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.298 [2024-10-01 13:43:54.619360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.298 [2024-10-01 13:43:54.629273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.298 [2024-10-01 13:43:54.629400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.298 [2024-10-01 13:43:54.629443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.298 [2024-10-01 13:43:54.629464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.298 [2024-10-01 13:43:54.629499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.298 [2024-10-01 13:43:54.630438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.298 [2024-10-01 13:43:54.630479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.298 [2024-10-01 13:43:54.630497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.298 [2024-10-01 13:43:54.630705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.298 [2024-10-01 13:43:54.640340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.298 [2024-10-01 13:43:54.640527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.298 [2024-10-01 13:43:54.640602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.298 [2024-10-01 13:43:54.640627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.298 [2024-10-01 13:43:54.640668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.298 [2024-10-01 13:43:54.640702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.298 [2024-10-01 13:43:54.640719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.298 [2024-10-01 13:43:54.640735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.298 [2024-10-01 13:43:54.640767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.298 [2024-10-01 13:43:54.650602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.298 [2024-10-01 13:43:54.650727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.298 [2024-10-01 13:43:54.650789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.298 [2024-10-01 13:43:54.650811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.298 [2024-10-01 13:43:54.650847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.298 [2024-10-01 13:43:54.650880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.298 [2024-10-01 13:43:54.650898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.298 [2024-10-01 13:43:54.650913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.298 [2024-10-01 13:43:54.650945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.298 [2024-10-01 13:43:54.661837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.298 [2024-10-01 13:43:54.661973] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.298 [2024-10-01 13:43:54.662041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.299 [2024-10-01 13:43:54.662065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.299 7833.50 IOPS, 30.60 MiB/s [2024-10-01 13:43:54.665017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.299 [2024-10-01 13:43:54.666001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.299 [2024-10-01 13:43:54.666062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.299 [2024-10-01 13:43:54.666093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.299 [2024-10-01 13:43:54.667176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
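The interleaved "7833.50 IOPS, 30.60 MiB/s" sample comes from the I/O workload that keeps running while the reconnect loop spins. As a rough consistency check, assuming a 4 KiB I/O size (the block size is not stated in this part of the log), the two numbers agree:

```python
# Back-of-envelope check of the perf sample printed above.
iops = 7833.50
io_size = 4096                       # bytes; 4 KiB is an assumption, not taken from the log
mib_per_s = iops * io_size / (1024 ** 2)
print(f"{mib_per_s:.2f} MiB/s")      # -> 30.60 MiB/s, matching the reported throughput
```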
00:16:17.299 [2024-10-01 13:43:54.672305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.299 [2024-10-01 13:43:54.672431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.299 [2024-10-01 13:43:54.672465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.299 [2024-10-01 13:43:54.672484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.299 [2024-10-01 13:43:54.672518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.299 [2024-10-01 13:43:54.672568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.299 [2024-10-01 13:43:54.672589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.299 [2024-10-01 13:43:54.672603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.299 [2024-10-01 13:43:54.672650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.299 [2024-10-01 13:43:54.684074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.299 [2024-10-01 13:43:54.684263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.299 [2024-10-01 13:43:54.684309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.299 [2024-10-01 13:43:54.684331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.299 [2024-10-01 13:43:54.684367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.299 [2024-10-01 13:43:54.684401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.299 [2024-10-01 13:43:54.684419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.299 [2024-10-01 13:43:54.684433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.299 [2024-10-01 13:43:54.684467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.299 [2024-10-01 13:43:54.694479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.299 [2024-10-01 13:43:54.694641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.299 [2024-10-01 13:43:54.694684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.299 [2024-10-01 13:43:54.694705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.299 [2024-10-01 13:43:54.694739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.299 [2024-10-01 13:43:54.694798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.299 [2024-10-01 13:43:54.694818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.299 [2024-10-01 13:43:54.694833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.299 [2024-10-01 13:43:54.694866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.299 [2024-10-01 13:43:54.705854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.299 [2024-10-01 13:43:54.706003] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.299 [2024-10-01 13:43:54.706050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.299 [2024-10-01 13:43:54.706071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.299 [2024-10-01 13:43:54.706106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.299 [2024-10-01 13:43:54.706139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.299 [2024-10-01 13:43:54.706157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.299 [2024-10-01 13:43:54.706171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.299 [2024-10-01 13:43:54.706203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.299 [2024-10-01 13:43:54.716294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.299 [2024-10-01 13:43:54.716430] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.299 [2024-10-01 13:43:54.716464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.299 [2024-10-01 13:43:54.716483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.299 [2024-10-01 13:43:54.716517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.299 [2024-10-01 13:43:54.717464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.299 [2024-10-01 13:43:54.717504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.299 [2024-10-01 13:43:54.717522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.299 [2024-10-01 13:43:54.717757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.299 [2024-10-01 13:43:54.727484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.299 [2024-10-01 13:43:54.727644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.299 [2024-10-01 13:43:54.727680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.299 [2024-10-01 13:43:54.727699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.299 [2024-10-01 13:43:54.727734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.299 [2024-10-01 13:43:54.727767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.299 [2024-10-01 13:43:54.727785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.299 [2024-10-01 13:43:54.727799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.299 [2024-10-01 13:43:54.727862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.299 [2024-10-01 13:43:54.737831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.299 [2024-10-01 13:43:54.737989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.299 [2024-10-01 13:43:54.738024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.299 [2024-10-01 13:43:54.738043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.299 [2024-10-01 13:43:54.738080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.299 [2024-10-01 13:43:54.738113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.299 [2024-10-01 13:43:54.738132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.299 [2024-10-01 13:43:54.738147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.299 [2024-10-01 13:43:54.738180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.299 [2024-10-01 13:43:54.749377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.299 [2024-10-01 13:43:54.749635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.299 [2024-10-01 13:43:54.749695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.299 [2024-10-01 13:43:54.749731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.299 [2024-10-01 13:43:54.749791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.299 [2024-10-01 13:43:54.749849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.299 [2024-10-01 13:43:54.749884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.299 [2024-10-01 13:43:54.749912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.299 [2024-10-01 13:43:54.749966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.299 [2024-10-01 13:43:54.760290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.299 [2024-10-01 13:43:54.760458] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.299 [2024-10-01 13:43:54.760495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.299 [2024-10-01 13:43:54.760515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.299 [2024-10-01 13:43:54.761463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.299 [2024-10-01 13:43:54.761728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.299 [2024-10-01 13:43:54.761768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.299 [2024-10-01 13:43:54.761787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.299 [2024-10-01 13:43:54.761870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.299 [2024-10-01 13:43:54.771382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.299 [2024-10-01 13:43:54.771523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.299 [2024-10-01 13:43:54.771576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.299 [2024-10-01 13:43:54.771627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.299 [2024-10-01 13:43:54.771667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.299 [2024-10-01 13:43:54.771701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.300 [2024-10-01 13:43:54.771718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.300 [2024-10-01 13:43:54.771732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.300 [2024-10-01 13:43:54.771766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.300 [2024-10-01 13:43:54.781600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.300 [2024-10-01 13:43:54.781748] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.300 [2024-10-01 13:43:54.781785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.300 [2024-10-01 13:43:54.781804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.300 [2024-10-01 13:43:54.781840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.300 [2024-10-01 13:43:54.781873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.300 [2024-10-01 13:43:54.781892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.300 [2024-10-01 13:43:54.781906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.300 [2024-10-01 13:43:54.781938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.300 [2024-10-01 13:43:54.793015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.300 [2024-10-01 13:43:54.793180] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.300 [2024-10-01 13:43:54.793215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.300 [2024-10-01 13:43:54.793234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.300 [2024-10-01 13:43:54.793269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.300 [2024-10-01 13:43:54.793302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.300 [2024-10-01 13:43:54.793320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.300 [2024-10-01 13:43:54.793335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.300 [2024-10-01 13:43:54.793367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.300 [2024-10-01 13:43:54.804261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.300 [2024-10-01 13:43:54.804393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.300 [2024-10-01 13:43:54.804428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.300 [2024-10-01 13:43:54.804447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.300 [2024-10-01 13:43:54.804481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.300 [2024-10-01 13:43:54.804514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.300 [2024-10-01 13:43:54.804574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.300 [2024-10-01 13:43:54.804592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.300 [2024-10-01 13:43:54.805558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.300 [2024-10-01 13:43:54.815322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.300 [2024-10-01 13:43:54.815455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.300 [2024-10-01 13:43:54.815490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.300 [2024-10-01 13:43:54.815508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.300 [2024-10-01 13:43:54.815561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.300 [2024-10-01 13:43:54.815598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.300 [2024-10-01 13:43:54.815617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.300 [2024-10-01 13:43:54.815632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.300 [2024-10-01 13:43:54.815664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.300 [2024-10-01 13:43:54.825632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.300 [2024-10-01 13:43:54.825770] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.300 [2024-10-01 13:43:54.825812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.300 [2024-10-01 13:43:54.825831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.300 [2024-10-01 13:43:54.825874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.300 [2024-10-01 13:43:54.825911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.300 [2024-10-01 13:43:54.825929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.300 [2024-10-01 13:43:54.825944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.300 [2024-10-01 13:43:54.825977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.300 [2024-10-01 13:43:54.836969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.300 [2024-10-01 13:43:54.837109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.300 [2024-10-01 13:43:54.837144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.300 [2024-10-01 13:43:54.837163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.300 [2024-10-01 13:43:54.837212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.300 [2024-10-01 13:43:54.837249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.300 [2024-10-01 13:43:54.837267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.300 [2024-10-01 13:43:54.837293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.300 [2024-10-01 13:43:54.837325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.300 [2024-10-01 13:43:54.848433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.300 [2024-10-01 13:43:54.848587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.300 [2024-10-01 13:43:54.848623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.300 [2024-10-01 13:43:54.848643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.300 [2024-10-01 13:43:54.848679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.300 [2024-10-01 13:43:54.848712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.300 [2024-10-01 13:43:54.848730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.300 [2024-10-01 13:43:54.848744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.300 [2024-10-01 13:43:54.848777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.300 [2024-10-01 13:43:54.860180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.300 [2024-10-01 13:43:54.860940] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.300 [2024-10-01 13:43:54.860987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.300 [2024-10-01 13:43:54.861008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.301 [2024-10-01 13:43:54.861102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.301 [2024-10-01 13:43:54.861143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.301 [2024-10-01 13:43:54.861161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.301 [2024-10-01 13:43:54.861177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.301 [2024-10-01 13:43:54.861210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.301 [2024-10-01 13:43:54.870290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.301 [2024-10-01 13:43:54.870440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.301 [2024-10-01 13:43:54.870475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.301 [2024-10-01 13:43:54.870495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.301 [2024-10-01 13:43:54.870530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.301 [2024-10-01 13:43:54.870613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.301 [2024-10-01 13:43:54.870637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.301 [2024-10-01 13:43:54.870652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.301 [2024-10-01 13:43:54.870687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.301 [2024-10-01 13:43:54.880407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.301 [2024-10-01 13:43:54.880558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.301 [2024-10-01 13:43:54.880595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.301 [2024-10-01 13:43:54.880614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.301 [2024-10-01 13:43:54.880683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.301 [2024-10-01 13:43:54.880718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.301 [2024-10-01 13:43:54.880736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.301 [2024-10-01 13:43:54.880751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.301 [2024-10-01 13:43:54.880783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.301 [2024-10-01 13:43:54.891885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.301 [2024-10-01 13:43:54.892019] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.301 [2024-10-01 13:43:54.892082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.301 [2024-10-01 13:43:54.892111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.301 [2024-10-01 13:43:54.892150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.301 [2024-10-01 13:43:54.892184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.301 [2024-10-01 13:43:54.892202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.301 [2024-10-01 13:43:54.892216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.301 [2024-10-01 13:43:54.892250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.301 [2024-10-01 13:43:54.903121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.301 [2024-10-01 13:43:54.903308] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.301 [2024-10-01 13:43:54.903378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.301 [2024-10-01 13:43:54.903417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.301 [2024-10-01 13:43:54.903476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.301 [2024-10-01 13:43:54.903574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.301 [2024-10-01 13:43:54.903619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.301 [2024-10-01 13:43:54.903648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.301 [2024-10-01 13:43:54.905250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.301 [2024-10-01 13:43:54.913518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.301 [2024-10-01 13:43:54.913679] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.301 [2024-10-01 13:43:54.913727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.301 [2024-10-01 13:43:54.913762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.301 [2024-10-01 13:43:54.914971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.301 [2024-10-01 13:43:54.915839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.301 [2024-10-01 13:43:54.915921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.301 [2024-10-01 13:43:54.916005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.301 [2024-10-01 13:43:54.916187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.301 [2024-10-01 13:43:54.925292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.301 [2024-10-01 13:43:54.925477] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.301 [2024-10-01 13:43:54.925530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.301 [2024-10-01 13:43:54.925590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.301 [2024-10-01 13:43:54.925648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.301 [2024-10-01 13:43:54.925700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.301 [2024-10-01 13:43:54.925730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.301 [2024-10-01 13:43:54.925754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.301 [2024-10-01 13:43:54.925811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.301 [2024-10-01 13:43:54.935490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.301 [2024-10-01 13:43:54.935639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.301 [2024-10-01 13:43:54.935675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.301 [2024-10-01 13:43:54.935695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.301 [2024-10-01 13:43:54.935743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.301 [2024-10-01 13:43:54.935785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.301 [2024-10-01 13:43:54.935804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.301 [2024-10-01 13:43:54.935819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.301 [2024-10-01 13:43:54.935853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.301 [2024-10-01 13:43:54.945609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.301 [2024-10-01 13:43:54.945736] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.301 [2024-10-01 13:43:54.945771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.301 [2024-10-01 13:43:54.945789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.301 [2024-10-01 13:43:54.945823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.301 [2024-10-01 13:43:54.945856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.301 [2024-10-01 13:43:54.945875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.301 [2024-10-01 13:43:54.945889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.301 [2024-10-01 13:43:54.945921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.301 [2024-10-01 13:43:54.955708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.301 [2024-10-01 13:43:54.955901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.301 [2024-10-01 13:43:54.955937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.301 [2024-10-01 13:43:54.955956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.301 [2024-10-01 13:43:54.957279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.301 [2024-10-01 13:43:54.957507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.301 [2024-10-01 13:43:54.957566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.301 [2024-10-01 13:43:54.957588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.301 [2024-10-01 13:43:54.958382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.301 [2024-10-01 13:43:54.967259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.301 [2024-10-01 13:43:54.967401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.301 [2024-10-01 13:43:54.967450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.301 [2024-10-01 13:43:54.967476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.301 [2024-10-01 13:43:54.967513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.301 [2024-10-01 13:43:54.967563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.302 [2024-10-01 13:43:54.967594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.302 [2024-10-01 13:43:54.967620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.302 [2024-10-01 13:43:54.967657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.302 [2024-10-01 13:43:54.977379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.302 [2024-10-01 13:43:54.977515] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.302 [2024-10-01 13:43:54.977572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.302 [2024-10-01 13:43:54.977608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.302 [2024-10-01 13:43:54.978978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.302 [2024-10-01 13:43:54.980015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.302 [2024-10-01 13:43:54.980067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.302 [2024-10-01 13:43:54.980101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.302 [2024-10-01 13:43:54.980274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.302 [2024-10-01 13:43:54.987483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.302 [2024-10-01 13:43:54.987672] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.302 [2024-10-01 13:43:54.987721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.302 [2024-10-01 13:43:54.987743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.302 [2024-10-01 13:43:54.987779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.302 [2024-10-01 13:43:54.989200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.302 [2024-10-01 13:43:54.989246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.302 [2024-10-01 13:43:54.989266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.302 [2024-10-01 13:43:54.989528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.302 [2024-10-01 13:43:54.998093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.302 [2024-10-01 13:43:54.998242] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.302 [2024-10-01 13:43:54.998290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.302 [2024-10-01 13:43:54.998313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.302 [2024-10-01 13:43:54.998352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.302 [2024-10-01 13:43:54.998419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.302 [2024-10-01 13:43:54.998448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.302 [2024-10-01 13:43:54.998464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.302 [2024-10-01 13:43:54.998499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.302 [2024-10-01 13:43:55.008981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.302 [2024-10-01 13:43:55.009131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.302 [2024-10-01 13:43:55.009170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.302 [2024-10-01 13:43:55.009203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.302 [2024-10-01 13:43:55.009249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.302 [2024-10-01 13:43:55.010754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.302 [2024-10-01 13:43:55.010810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.302 [2024-10-01 13:43:55.010842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.302 [2024-10-01 13:43:55.011880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.302 [2024-10-01 13:43:55.020815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.302 [2024-10-01 13:43:55.021141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.302 [2024-10-01 13:43:55.021203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.302 [2024-10-01 13:43:55.021243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.302 [2024-10-01 13:43:55.021404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.302 [2024-10-01 13:43:55.021486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.302 [2024-10-01 13:43:55.021521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.302 [2024-10-01 13:43:55.021570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.302 [2024-10-01 13:43:55.023303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.302 [2024-10-01 13:43:55.030946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.302 [2024-10-01 13:43:55.031080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.302 [2024-10-01 13:43:55.031115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.302 [2024-10-01 13:43:55.031134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.302 [2024-10-01 13:43:55.031169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.302 [2024-10-01 13:43:55.031202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.302 [2024-10-01 13:43:55.031220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.302 [2024-10-01 13:43:55.031235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.302 [2024-10-01 13:43:55.031268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.302 [2024-10-01 13:43:55.041475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.302 [2024-10-01 13:43:55.042859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.302 [2024-10-01 13:43:55.042911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.302 [2024-10-01 13:43:55.042933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.302 [2024-10-01 13:43:55.043847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.302 [2024-10-01 13:43:55.044143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.302 [2024-10-01 13:43:55.044182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.302 [2024-10-01 13:43:55.044201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.302 [2024-10-01 13:43:55.044240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.302 [2024-10-01 13:43:55.051757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.302 [2024-10-01 13:43:55.051911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.302 [2024-10-01 13:43:55.051947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.302 [2024-10-01 13:43:55.051967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.302 [2024-10-01 13:43:55.052003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.302 [2024-10-01 13:43:55.052037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.302 [2024-10-01 13:43:55.052055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.302 [2024-10-01 13:43:55.052080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.302 [2024-10-01 13:43:55.052134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.302 [2024-10-01 13:43:55.063115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.302 [2024-10-01 13:43:55.063260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.302 [2024-10-01 13:43:55.063295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.302 [2024-10-01 13:43:55.063338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.302 [2024-10-01 13:43:55.063382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.302 [2024-10-01 13:43:55.063436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.302 [2024-10-01 13:43:55.063458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.302 [2024-10-01 13:43:55.063473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.302 [2024-10-01 13:43:55.063525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.302 [2024-10-01 13:43:55.074915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.302 [2024-10-01 13:43:55.075067] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.302 [2024-10-01 13:43:55.075111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.302 [2024-10-01 13:43:55.075132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.302 [2024-10-01 13:43:55.075169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.302 [2024-10-01 13:43:55.075202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.302 [2024-10-01 13:43:55.075231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.302 [2024-10-01 13:43:55.075260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.303 [2024-10-01 13:43:55.075313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.303 [2024-10-01 13:43:55.086652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.303 [2024-10-01 13:43:55.086844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.303 [2024-10-01 13:43:55.086893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.303 [2024-10-01 13:43:55.086925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.303 [2024-10-01 13:43:55.087988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.303 [2024-10-01 13:43:55.088236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.303 [2024-10-01 13:43:55.088275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.303 [2024-10-01 13:43:55.088294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.303 [2024-10-01 13:43:55.088377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.303 [2024-10-01 13:43:55.096779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.303 [2024-10-01 13:43:55.096915] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.303 [2024-10-01 13:43:55.096950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.303 [2024-10-01 13:43:55.096968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.303 [2024-10-01 13:43:55.097806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.303 [2024-10-01 13:43:55.098028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.303 [2024-10-01 13:43:55.098088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.303 [2024-10-01 13:43:55.098108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.303 [2024-10-01 13:43:55.098237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.303 [2024-10-01 13:43:55.107039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.303 [2024-10-01 13:43:55.107164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.303 [2024-10-01 13:43:55.107198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.303 [2024-10-01 13:43:55.107217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.303 [2024-10-01 13:43:55.107251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.303 [2024-10-01 13:43:55.107283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.303 [2024-10-01 13:43:55.107301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.303 [2024-10-01 13:43:55.107316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.303 [2024-10-01 13:43:55.107349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.303 [2024-10-01 13:43:55.117139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.303 [2024-10-01 13:43:55.117271] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.303 [2024-10-01 13:43:55.117306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.303 [2024-10-01 13:43:55.117325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.303 [2024-10-01 13:43:55.117360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.303 [2024-10-01 13:43:55.117392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.303 [2024-10-01 13:43:55.117410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.303 [2024-10-01 13:43:55.117424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.303 [2024-10-01 13:43:55.117457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.303 [2024-10-01 13:43:55.127714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.303 [2024-10-01 13:43:55.127856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.303 [2024-10-01 13:43:55.127904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.303 [2024-10-01 13:43:55.127923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.303 [2024-10-01 13:43:55.127958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.303 [2024-10-01 13:43:55.127991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.303 [2024-10-01 13:43:55.128009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.303 [2024-10-01 13:43:55.128024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.303 [2024-10-01 13:43:55.128056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.303 [2024-10-01 13:43:55.138085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.303 [2024-10-01 13:43:55.138217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.303 [2024-10-01 13:43:55.138251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.303 [2024-10-01 13:43:55.138270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.303 [2024-10-01 13:43:55.139585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.303 [2024-10-01 13:43:55.139849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.303 [2024-10-01 13:43:55.139901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.303 [2024-10-01 13:43:55.139920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.303 [2024-10-01 13:43:55.139958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.303 [2024-10-01 13:43:55.148236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.303 [2024-10-01 13:43:55.148363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.303 [2024-10-01 13:43:55.148397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.303 [2024-10-01 13:43:55.148416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.303 [2024-10-01 13:43:55.148449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.303 [2024-10-01 13:43:55.148482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.303 [2024-10-01 13:43:55.148503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.303 [2024-10-01 13:43:55.148524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.303 [2024-10-01 13:43:55.148574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.303 [2024-10-01 13:43:55.158566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.303 [2024-10-01 13:43:55.158788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.303 [2024-10-01 13:43:55.158827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.303 [2024-10-01 13:43:55.158848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.303 [2024-10-01 13:43:55.158887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.303 [2024-10-01 13:43:55.158922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.303 [2024-10-01 13:43:55.158939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.303 [2024-10-01 13:43:55.158955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.303 [2024-10-01 13:43:55.158988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.303 [2024-10-01 13:43:55.169746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.303 [2024-10-01 13:43:55.169898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.303 [2024-10-01 13:43:55.169933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.303 [2024-10-01 13:43:55.169953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.303 [2024-10-01 13:43:55.170019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.303 [2024-10-01 13:43:55.170070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.303 [2024-10-01 13:43:55.170093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.303 [2024-10-01 13:43:55.170108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.303 [2024-10-01 13:43:55.170142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.303 [2024-10-01 13:43:55.179856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.303 [2024-10-01 13:43:55.180005] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.303 [2024-10-01 13:43:55.180039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.303 [2024-10-01 13:43:55.180058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.303 [2024-10-01 13:43:55.181010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.303 [2024-10-01 13:43:55.181225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.303 [2024-10-01 13:43:55.181261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.303 [2024-10-01 13:43:55.181279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.303 [2024-10-01 13:43:55.181359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.303 [2024-10-01 13:43:55.190649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.304 [2024-10-01 13:43:55.190846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.304 [2024-10-01 13:43:55.190883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.304 [2024-10-01 13:43:55.190904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.304 [2024-10-01 13:43:55.190941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.304 [2024-10-01 13:43:55.190975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.304 [2024-10-01 13:43:55.190993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.304 [2024-10-01 13:43:55.191009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.304 [2024-10-01 13:43:55.191042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.304 [2024-10-01 13:43:55.201049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.304 [2024-10-01 13:43:55.201174] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.304 [2024-10-01 13:43:55.201208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.304 [2024-10-01 13:43:55.201227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.304 [2024-10-01 13:43:55.201261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.304 [2024-10-01 13:43:55.201294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.304 [2024-10-01 13:43:55.201320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.304 [2024-10-01 13:43:55.201368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.304 [2024-10-01 13:43:55.201404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.304 [2024-10-01 13:43:55.212250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.304 [2024-10-01 13:43:55.212471] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.304 [2024-10-01 13:43:55.212512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.304 [2024-10-01 13:43:55.212532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.304 [2024-10-01 13:43:55.212610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.304 [2024-10-01 13:43:55.212649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.304 [2024-10-01 13:43:55.212668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.304 [2024-10-01 13:43:55.212684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.304 [2024-10-01 13:43:55.212718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.304 [2024-10-01 13:43:55.222548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.304 [2024-10-01 13:43:55.222742] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.304 [2024-10-01 13:43:55.222779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.304 [2024-10-01 13:43:55.222808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.304 [2024-10-01 13:43:55.223762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.304 [2024-10-01 13:43:55.223995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.304 [2024-10-01 13:43:55.224032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.304 [2024-10-01 13:43:55.224052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.304 [2024-10-01 13:43:55.225348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.304 [2024-10-01 13:43:55.233368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.304 [2024-10-01 13:43:55.233513] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.304 [2024-10-01 13:43:55.233564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.304 [2024-10-01 13:43:55.233586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.304 [2024-10-01 13:43:55.233623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.304 [2024-10-01 13:43:55.233678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.304 [2024-10-01 13:43:55.233711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.304 [2024-10-01 13:43:55.233740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.304 [2024-10-01 13:43:55.233797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.304 [2024-10-01 13:43:55.243479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.304 [2024-10-01 13:43:55.243664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.304 [2024-10-01 13:43:55.243700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.304 [2024-10-01 13:43:55.243721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.304 [2024-10-01 13:43:55.243757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.304 [2024-10-01 13:43:55.243790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.304 [2024-10-01 13:43:55.243808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.304 [2024-10-01 13:43:55.243823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.304 [2024-10-01 13:43:55.243856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.304 [2024-10-01 13:43:55.254609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.304 [2024-10-01 13:43:55.254751] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.304 [2024-10-01 13:43:55.254786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.304 [2024-10-01 13:43:55.254805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.304 [2024-10-01 13:43:55.254856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.304 [2024-10-01 13:43:55.254893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.304 [2024-10-01 13:43:55.254912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.304 [2024-10-01 13:43:55.254926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.304 [2024-10-01 13:43:55.254958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.304 [2024-10-01 13:43:55.264797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.304 [2024-10-01 13:43:55.264930] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.304 [2024-10-01 13:43:55.264964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.304 [2024-10-01 13:43:55.264983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.304 [2024-10-01 13:43:55.265927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.304 [2024-10-01 13:43:55.266162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.304 [2024-10-01 13:43:55.266202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.304 [2024-10-01 13:43:55.266220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.304 [2024-10-01 13:43:55.266302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.304 [2024-10-01 13:43:55.275496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.304 [2024-10-01 13:43:55.275637] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.304 [2024-10-01 13:43:55.275672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.304 [2024-10-01 13:43:55.275691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.304 [2024-10-01 13:43:55.275725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.304 [2024-10-01 13:43:55.275803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.304 [2024-10-01 13:43:55.275826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.304 [2024-10-01 13:43:55.275841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.304 [2024-10-01 13:43:55.275887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.304 [2024-10-01 13:43:55.285695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.305 [2024-10-01 13:43:55.285824] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.305 [2024-10-01 13:43:55.285859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.305 [2024-10-01 13:43:55.285878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.305 [2024-10-01 13:43:55.285912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.305 [2024-10-01 13:43:55.285944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.305 [2024-10-01 13:43:55.285962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.305 [2024-10-01 13:43:55.285976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.305 [2024-10-01 13:43:55.286009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.305 [2024-10-01 13:43:55.296707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.305 [2024-10-01 13:43:55.296833] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.305 [2024-10-01 13:43:55.296867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.305 [2024-10-01 13:43:55.296886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.305 [2024-10-01 13:43:55.296922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.305 [2024-10-01 13:43:55.296983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.305 [2024-10-01 13:43:55.297007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.305 [2024-10-01 13:43:55.297022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.305 [2024-10-01 13:43:55.297055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.305 [2024-10-01 13:43:55.306864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.305 [2024-10-01 13:43:55.307063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.305 [2024-10-01 13:43:55.307100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.305 [2024-10-01 13:43:55.307120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.305 [2024-10-01 13:43:55.308097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.305 [2024-10-01 13:43:55.308328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.305 [2024-10-01 13:43:55.308365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.305 [2024-10-01 13:43:55.308384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.305 [2024-10-01 13:43:55.308550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.305 [2024-10-01 13:43:55.317723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.305 [2024-10-01 13:43:55.317878] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.305 [2024-10-01 13:43:55.317914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.305 [2024-10-01 13:43:55.317933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.305 [2024-10-01 13:43:55.317969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.305 [2024-10-01 13:43:55.318017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.305 [2024-10-01 13:43:55.318040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.305 [2024-10-01 13:43:55.318056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.305 [2024-10-01 13:43:55.318089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.305 [2024-10-01 13:43:55.327861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.305 [2024-10-01 13:43:55.328085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.305 [2024-10-01 13:43:55.328123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.305 [2024-10-01 13:43:55.328144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.305 [2024-10-01 13:43:55.328181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.305 [2024-10-01 13:43:55.328215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.305 [2024-10-01 13:43:55.328232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.305 [2024-10-01 13:43:55.328248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.305 [2024-10-01 13:43:55.328289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.305 [2024-10-01 13:43:55.339401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.305 [2024-10-01 13:43:55.339603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.305 [2024-10-01 13:43:55.339641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.305 [2024-10-01 13:43:55.339661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.305 [2024-10-01 13:43:55.339700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.305 [2024-10-01 13:43:55.339733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.305 [2024-10-01 13:43:55.339752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.305 [2024-10-01 13:43:55.339767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.305 [2024-10-01 13:43:55.339800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.305 [2024-10-01 13:43:55.349598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.305 [2024-10-01 13:43:55.349774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.305 [2024-10-01 13:43:55.349810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.305 [2024-10-01 13:43:55.349861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.305 [2024-10-01 13:43:55.350819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.305 [2024-10-01 13:43:55.351053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.305 [2024-10-01 13:43:55.351089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.305 [2024-10-01 13:43:55.351109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.305 [2024-10-01 13:43:55.352412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.305 [2024-10-01 13:43:55.360407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.305 [2024-10-01 13:43:55.360558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.305 [2024-10-01 13:43:55.360594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.305 [2024-10-01 13:43:55.360613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.305 [2024-10-01 13:43:55.360662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.305 [2024-10-01 13:43:55.360700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.305 [2024-10-01 13:43:55.360718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.305 [2024-10-01 13:43:55.360733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.305 [2024-10-01 13:43:55.360765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.305 [2024-10-01 13:43:55.370517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.305 [2024-10-01 13:43:55.370661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.305 [2024-10-01 13:43:55.370695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.305 [2024-10-01 13:43:55.370713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.305 [2024-10-01 13:43:55.370748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.305 [2024-10-01 13:43:55.370780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.305 [2024-10-01 13:43:55.370798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.305 [2024-10-01 13:43:55.370812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.305 [2024-10-01 13:43:55.370843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.305 [2024-10-01 13:43:55.381621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.305 [2024-10-01 13:43:55.381752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.305 [2024-10-01 13:43:55.381790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.305 [2024-10-01 13:43:55.381811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.305 [2024-10-01 13:43:55.381845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.305 [2024-10-01 13:43:55.381893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.305 [2024-10-01 13:43:55.381945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.305 [2024-10-01 13:43:55.381962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.305 [2024-10-01 13:43:55.381996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.305 [2024-10-01 13:43:55.391751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.305 [2024-10-01 13:43:55.391932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.305 [2024-10-01 13:43:55.391969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.306 [2024-10-01 13:43:55.391989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.306 [2024-10-01 13:43:55.392949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.306 [2024-10-01 13:43:55.393189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.306 [2024-10-01 13:43:55.393226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.306 [2024-10-01 13:43:55.393246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.306 [2024-10-01 13:43:55.393328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.306 [2024-10-01 13:43:55.402611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.306 [2024-10-01 13:43:55.402792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.306 [2024-10-01 13:43:55.402831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.306 [2024-10-01 13:43:55.402851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.306 [2024-10-01 13:43:55.402886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.306 [2024-10-01 13:43:55.402920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.306 [2024-10-01 13:43:55.402938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.306 [2024-10-01 13:43:55.402953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.306 [2024-10-01 13:43:55.402988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.306 [2024-10-01 13:43:55.412745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.306 [2024-10-01 13:43:55.412875] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.306 [2024-10-01 13:43:55.412908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.306 [2024-10-01 13:43:55.412927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.306 [2024-10-01 13:43:55.412960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.306 [2024-10-01 13:43:55.412993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.306 [2024-10-01 13:43:55.413011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.306 [2024-10-01 13:43:55.413026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.306 [2024-10-01 13:43:55.413058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.306 [2024-10-01 13:43:55.423746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.306 [2024-10-01 13:43:55.423933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.306 [2024-10-01 13:43:55.423975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.306 [2024-10-01 13:43:55.423996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.306 [2024-10-01 13:43:55.424035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.306 [2024-10-01 13:43:55.424069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.306 [2024-10-01 13:43:55.424087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.306 [2024-10-01 13:43:55.424102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.306 [2024-10-01 13:43:55.424136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.306 [2024-10-01 13:43:55.433934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.306 [2024-10-01 13:43:55.434105] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.306 [2024-10-01 13:43:55.434142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.306 [2024-10-01 13:43:55.434162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.306 [2024-10-01 13:43:55.435128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.306 [2024-10-01 13:43:55.435346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.306 [2024-10-01 13:43:55.435391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.306 [2024-10-01 13:43:55.435411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.306 [2024-10-01 13:43:55.435494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.306 [2024-10-01 13:43:55.444792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.306 [2024-10-01 13:43:55.444984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.306 [2024-10-01 13:43:55.445030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.306 [2024-10-01 13:43:55.445052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.306 [2024-10-01 13:43:55.446006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.306 [2024-10-01 13:43:55.446670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.306 [2024-10-01 13:43:55.446710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.306 [2024-10-01 13:43:55.446729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.306 [2024-10-01 13:43:55.446851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.306 [2024-10-01 13:43:55.455131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.306 [2024-10-01 13:43:55.455323] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.306 [2024-10-01 13:43:55.455359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.306 [2024-10-01 13:43:55.455379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.306 [2024-10-01 13:43:55.456242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.306 [2024-10-01 13:43:55.456467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.306 [2024-10-01 13:43:55.456504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.306 [2024-10-01 13:43:55.456524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.306 [2024-10-01 13:43:55.456586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.306 [2024-10-01 13:43:55.465363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.306 [2024-10-01 13:43:55.465573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.306 [2024-10-01 13:43:55.465612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.306 [2024-10-01 13:43:55.465632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.306 [2024-10-01 13:43:55.465672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.306 [2024-10-01 13:43:55.465706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.306 [2024-10-01 13:43:55.465724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.306 [2024-10-01 13:43:55.465741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.306 [2024-10-01 13:43:55.465774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.306 [2024-10-01 13:43:55.475619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.306 [2024-10-01 13:43:55.475765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.306 [2024-10-01 13:43:55.475800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.306 [2024-10-01 13:43:55.475820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.306 [2024-10-01 13:43:55.475856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.306 [2024-10-01 13:43:55.475905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.306 [2024-10-01 13:43:55.475924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.306 [2024-10-01 13:43:55.475938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.306 [2024-10-01 13:43:55.475970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.306 [2024-10-01 13:43:55.486814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.306 [2024-10-01 13:43:55.487003] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.306 [2024-10-01 13:43:55.487039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.306 [2024-10-01 13:43:55.487059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.306 [2024-10-01 13:43:55.487095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.306 [2024-10-01 13:43:55.487129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.306 [2024-10-01 13:43:55.487148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.306 [2024-10-01 13:43:55.487199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.306 [2024-10-01 13:43:55.487246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.306 [2024-10-01 13:43:55.497131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.306 [2024-10-01 13:43:55.497333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.306 [2024-10-01 13:43:55.497369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.306 [2024-10-01 13:43:55.497389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.306 [2024-10-01 13:43:55.498353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.306 [2024-10-01 13:43:55.498598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.307 [2024-10-01 13:43:55.498635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.307 [2024-10-01 13:43:55.498654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.307 [2024-10-01 13:43:55.498737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.307 [2024-10-01 13:43:55.508309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.307 [2024-10-01 13:43:55.508516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.307 [2024-10-01 13:43:55.508582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.307 [2024-10-01 13:43:55.508606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.307 [2024-10-01 13:43:55.508645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.307 [2024-10-01 13:43:55.508681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.307 [2024-10-01 13:43:55.508699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.307 [2024-10-01 13:43:55.508715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.307 [2024-10-01 13:43:55.508749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.307 [2024-10-01 13:43:55.518714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.307 [2024-10-01 13:43:55.518852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.307 [2024-10-01 13:43:55.518887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.307 [2024-10-01 13:43:55.518907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.307 [2024-10-01 13:43:55.518949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.307 [2024-10-01 13:43:55.518984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.307 [2024-10-01 13:43:55.519002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.307 [2024-10-01 13:43:55.519016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.307 [2024-10-01 13:43:55.519048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.307 [2024-10-01 13:43:55.530127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.307 [2024-10-01 13:43:55.530319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.307 [2024-10-01 13:43:55.530356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.307 [2024-10-01 13:43:55.530376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.307 [2024-10-01 13:43:55.530411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.307 [2024-10-01 13:43:55.530444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.307 [2024-10-01 13:43:55.530462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.307 [2024-10-01 13:43:55.530477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.307 [2024-10-01 13:43:55.530510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.307 [2024-10-01 13:43:55.541884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.307 [2024-10-01 13:43:55.543516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.307 [2024-10-01 13:43:55.543586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.307 [2024-10-01 13:43:55.543609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.307 [2024-10-01 13:43:55.543789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.307 [2024-10-01 13:43:55.543842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.307 [2024-10-01 13:43:55.543863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.307 [2024-10-01 13:43:55.543895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.307 [2024-10-01 13:43:55.543932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.307 [2024-10-01 13:43:55.552097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.307 [2024-10-01 13:43:55.552229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.307 [2024-10-01 13:43:55.552263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.307 [2024-10-01 13:43:55.552283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.307 [2024-10-01 13:43:55.552317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.307 [2024-10-01 13:43:55.552350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.307 [2024-10-01 13:43:55.552368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.307 [2024-10-01 13:43:55.552383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.307 [2024-10-01 13:43:55.552425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.307 [2024-10-01 13:43:55.562595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.307 [2024-10-01 13:43:55.562736] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.307 [2024-10-01 13:43:55.562788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.307 [2024-10-01 13:43:55.562810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.307 [2024-10-01 13:43:55.562871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.307 [2024-10-01 13:43:55.562907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.307 [2024-10-01 13:43:55.562925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.307 [2024-10-01 13:43:55.562940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.307 [2024-10-01 13:43:55.562984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.307 [2024-10-01 13:43:55.574027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.307 [2024-10-01 13:43:55.574165] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.307 [2024-10-01 13:43:55.574200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.307 [2024-10-01 13:43:55.574218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.307 [2024-10-01 13:43:55.574253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.307 [2024-10-01 13:43:55.574286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.307 [2024-10-01 13:43:55.574305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.307 [2024-10-01 13:43:55.574319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.307 [2024-10-01 13:43:55.574352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.307 [2024-10-01 13:43:55.584410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.307 [2024-10-01 13:43:55.584553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.307 [2024-10-01 13:43:55.584588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.307 [2024-10-01 13:43:55.584607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.307 [2024-10-01 13:43:55.584642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.307 [2024-10-01 13:43:55.584675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.307 [2024-10-01 13:43:55.584693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.307 [2024-10-01 13:43:55.584707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.307 [2024-10-01 13:43:55.584740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.307 [2024-10-01 13:43:55.595760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.307 [2024-10-01 13:43:55.595943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.307 [2024-10-01 13:43:55.595980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.307 [2024-10-01 13:43:55.596001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.307 [2024-10-01 13:43:55.596037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.307 [2024-10-01 13:43:55.596071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.307 [2024-10-01 13:43:55.596089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.307 [2024-10-01 13:43:55.596104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.307 [2024-10-01 13:43:55.596176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.307 [2024-10-01 13:43:55.606621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.307 [2024-10-01 13:43:55.606786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.307 [2024-10-01 13:43:55.606822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.307 [2024-10-01 13:43:55.606842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.307 [2024-10-01 13:43:55.606880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.307 [2024-10-01 13:43:55.606924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.307 [2024-10-01 13:43:55.606943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.307 [2024-10-01 13:43:55.606959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.307 [2024-10-01 13:43:55.606992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.308 [2024-10-01 13:43:55.618286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.308 [2024-10-01 13:43:55.618429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.308 [2024-10-01 13:43:55.618471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.308 [2024-10-01 13:43:55.618503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.308 [2024-10-01 13:43:55.618557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.308 [2024-10-01 13:43:55.618594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.308 [2024-10-01 13:43:55.618612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.308 [2024-10-01 13:43:55.618626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.308 [2024-10-01 13:43:55.618659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.308 [2024-10-01 13:43:55.628950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.308 [2024-10-01 13:43:55.629078] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.308 [2024-10-01 13:43:55.629118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.308 [2024-10-01 13:43:55.629139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.308 [2024-10-01 13:43:55.629174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.308 [2024-10-01 13:43:55.630113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.308 [2024-10-01 13:43:55.630155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.308 [2024-10-01 13:43:55.630179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.308 [2024-10-01 13:43:55.630380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.308 [2024-10-01 13:43:55.640169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.308 [2024-10-01 13:43:55.640895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.308 [2024-10-01 13:43:55.640963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.308 [2024-10-01 13:43:55.640987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.308 [2024-10-01 13:43:55.641077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.308 [2024-10-01 13:43:55.641141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.308 [2024-10-01 13:43:55.641166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.308 [2024-10-01 13:43:55.641181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.308 [2024-10-01 13:43:55.641215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.308 [2024-10-01 13:43:55.651220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.308 [2024-10-01 13:43:55.651371] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.308 [2024-10-01 13:43:55.651405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.308 [2024-10-01 13:43:55.651425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.308 [2024-10-01 13:43:55.651460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.308 [2024-10-01 13:43:55.651503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.308 [2024-10-01 13:43:55.651523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.308 [2024-10-01 13:43:55.651553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.308 [2024-10-01 13:43:55.651591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.308 [2024-10-01 13:43:55.664581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.308 8038.33 IOPS, 31.40 MiB/s [2024-10-01 13:43:55.666010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.308 [2024-10-01 13:43:55.666062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.308 [2024-10-01 13:43:55.666086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.308 [2024-10-01 13:43:55.666961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.308 [2024-10-01 13:43:55.667105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.308 [2024-10-01 13:43:55.667143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.308 [2024-10-01 13:43:55.667162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.308 [2024-10-01 13:43:55.667201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.308 [2024-10-01 13:43:55.675194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.308 [2024-10-01 13:43:55.675318] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.308 [2024-10-01 13:43:55.675352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.308 [2024-10-01 13:43:55.675371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.308 [2024-10-01 13:43:55.675405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.308 [2024-10-01 13:43:55.676706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.308 [2024-10-01 13:43:55.676747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.308 [2024-10-01 13:43:55.676766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.308 [2024-10-01 13:43:55.677658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.308 [2024-10-01 13:43:55.685295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.308 [2024-10-01 13:43:55.685423] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.308 [2024-10-01 13:43:55.685458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.308 [2024-10-01 13:43:55.685485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.308 [2024-10-01 13:43:55.685521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.308 [2024-10-01 13:43:55.685572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.308 [2024-10-01 13:43:55.685592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.308 [2024-10-01 13:43:55.685607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.308 [2024-10-01 13:43:55.685869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.308 [2024-10-01 13:43:55.696075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.308 [2024-10-01 13:43:55.696205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.308 [2024-10-01 13:43:55.696249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.308 [2024-10-01 13:43:55.696270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.308 [2024-10-01 13:43:55.696311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.308 [2024-10-01 13:43:55.696344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.308 [2024-10-01 13:43:55.696361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.308 [2024-10-01 13:43:55.696376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.308 [2024-10-01 13:43:55.696441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.308 [2024-10-01 13:43:55.707937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.308 [2024-10-01 13:43:55.708133] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.308 [2024-10-01 13:43:55.708170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.308 [2024-10-01 13:43:55.708189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.308 [2024-10-01 13:43:55.708228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.308 [2024-10-01 13:43:55.708262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.308 [2024-10-01 13:43:55.708280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.308 [2024-10-01 13:43:55.708296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.308 [2024-10-01 13:43:55.708330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.308 [2024-10-01 13:43:55.718362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.308 [2024-10-01 13:43:55.718594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.308 [2024-10-01 13:43:55.718634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.308 [2024-10-01 13:43:55.718654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.308 [2024-10-01 13:43:55.719642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.308 [2024-10-01 13:43:55.719949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.308 [2024-10-01 13:43:55.719991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.308 [2024-10-01 13:43:55.720011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.308 [2024-10-01 13:43:55.720154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.308 [2024-10-01 13:43:55.729564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.308 [2024-10-01 13:43:55.729756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.309 [2024-10-01 13:43:55.729794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.309 [2024-10-01 13:43:55.729813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.309 [2024-10-01 13:43:55.729849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.309 [2024-10-01 13:43:55.729891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.309 [2024-10-01 13:43:55.729911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.309 [2024-10-01 13:43:55.729926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.309 [2024-10-01 13:43:55.729960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.309 [2024-10-01 13:43:55.739938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.309 [2024-10-01 13:43:55.740070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.309 [2024-10-01 13:43:55.740103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.309 [2024-10-01 13:43:55.740122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.309 [2024-10-01 13:43:55.740167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.309 [2024-10-01 13:43:55.740201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.309 [2024-10-01 13:43:55.740228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.309 [2024-10-01 13:43:55.740243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.309 [2024-10-01 13:43:55.740277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.309 [2024-10-01 13:43:55.751111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.309 [2024-10-01 13:43:55.751247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.309 [2024-10-01 13:43:55.751282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.309 [2024-10-01 13:43:55.751330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.309 [2024-10-01 13:43:55.751369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.309 [2024-10-01 13:43:55.751403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.309 [2024-10-01 13:43:55.751427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.309 [2024-10-01 13:43:55.751442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.309 [2024-10-01 13:43:55.751475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.309 [2024-10-01 13:43:55.761331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.309 [2024-10-01 13:43:55.761466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.309 [2024-10-01 13:43:55.761502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.309 [2024-10-01 13:43:55.761521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.309 [2024-10-01 13:43:55.762472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.309 [2024-10-01 13:43:55.762748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.309 [2024-10-01 13:43:55.762788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.309 [2024-10-01 13:43:55.762807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.309 [2024-10-01 13:43:55.762898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.309 [2024-10-01 13:43:55.772141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.309 [2024-10-01 13:43:55.772274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.309 [2024-10-01 13:43:55.772309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.309 [2024-10-01 13:43:55.772327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.309 [2024-10-01 13:43:55.772361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.309 [2024-10-01 13:43:55.772393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.309 [2024-10-01 13:43:55.772412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.309 [2024-10-01 13:43:55.772426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.309 [2024-10-01 13:43:55.772459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.309 [2024-10-01 13:43:55.782244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.309 [2024-10-01 13:43:55.782379] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.309 [2024-10-01 13:43:55.782414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.309 [2024-10-01 13:43:55.782433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.309 [2024-10-01 13:43:55.782468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.309 [2024-10-01 13:43:55.782501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.309 [2024-10-01 13:43:55.782574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.309 [2024-10-01 13:43:55.782592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.309 [2024-10-01 13:43:55.782627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.309 [2024-10-01 13:43:55.793309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.309 [2024-10-01 13:43:55.793502] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.309 [2024-10-01 13:43:55.793555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.309 [2024-10-01 13:43:55.793578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.309 [2024-10-01 13:43:55.793615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.309 [2024-10-01 13:43:55.793649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.309 [2024-10-01 13:43:55.793666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.309 [2024-10-01 13:43:55.793682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.309 [2024-10-01 13:43:55.793715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.309 [2024-10-01 13:43:55.803449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.309 [2024-10-01 13:43:55.804528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.309 [2024-10-01 13:43:55.804593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.309 [2024-10-01 13:43:55.804617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.309 [2024-10-01 13:43:55.804813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.309 [2024-10-01 13:43:55.804915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.309 [2024-10-01 13:43:55.804938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.309 [2024-10-01 13:43:55.804953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.309 [2024-10-01 13:43:55.806202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.309 [2024-10-01 13:43:55.814111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.309 [2024-10-01 13:43:55.814234] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.309 [2024-10-01 13:43:55.814268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.309 [2024-10-01 13:43:55.814288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.309 [2024-10-01 13:43:55.814322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.309 [2024-10-01 13:43:55.814372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.309 [2024-10-01 13:43:55.814402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.309 [2024-10-01 13:43:55.814418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.309 [2024-10-01 13:43:55.814460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.309 [2024-10-01 13:43:55.824220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.309 [2024-10-01 13:43:55.824429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.310 [2024-10-01 13:43:55.824467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.310 [2024-10-01 13:43:55.824488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.310 [2024-10-01 13:43:55.824524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.310 [2024-10-01 13:43:55.824577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.310 [2024-10-01 13:43:55.824596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.310 [2024-10-01 13:43:55.824611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.310 [2024-10-01 13:43:55.824645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.310 [2024-10-01 13:43:55.835238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.310 [2024-10-01 13:43:55.835443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.310 [2024-10-01 13:43:55.835481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.310 [2024-10-01 13:43:55.835501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.310 [2024-10-01 13:43:55.835552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.310 [2024-10-01 13:43:55.835609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.310 [2024-10-01 13:43:55.835632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.310 [2024-10-01 13:43:55.835648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.310 [2024-10-01 13:43:55.835682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.310 [2024-10-01 13:43:55.845418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.310 [2024-10-01 13:43:55.845597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.310 [2024-10-01 13:43:55.845640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.310 [2024-10-01 13:43:55.845660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.310 [2024-10-01 13:43:55.846606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.310 [2024-10-01 13:43:55.846823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.310 [2024-10-01 13:43:55.846866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.310 [2024-10-01 13:43:55.846884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.310 [2024-10-01 13:43:55.846967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.310 [2024-10-01 13:43:55.856137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.310 [2024-10-01 13:43:55.856274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.310 [2024-10-01 13:43:55.856308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.310 [2024-10-01 13:43:55.856327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.310 [2024-10-01 13:43:55.856394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.310 [2024-10-01 13:43:55.856429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.310 [2024-10-01 13:43:55.856447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.310 [2024-10-01 13:43:55.856462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.310 [2024-10-01 13:43:55.856494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.310 [2024-10-01 13:43:55.866254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.310 [2024-10-01 13:43:55.866396] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.310 [2024-10-01 13:43:55.866430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.310 [2024-10-01 13:43:55.866449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.310 [2024-10-01 13:43:55.866483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.310 [2024-10-01 13:43:55.866517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.310 [2024-10-01 13:43:55.866553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.310 [2024-10-01 13:43:55.866571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.310 [2024-10-01 13:43:55.866605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.310 [2024-10-01 13:43:55.877964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.310 [2024-10-01 13:43:55.878122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.310 [2024-10-01 13:43:55.878160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.310 [2024-10-01 13:43:55.878180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.310 [2024-10-01 13:43:55.878215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.310 [2024-10-01 13:43:55.878249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.310 [2024-10-01 13:43:55.878267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.310 [2024-10-01 13:43:55.878282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.310 [2024-10-01 13:43:55.878315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.310 [2024-10-01 13:43:55.888499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.310 [2024-10-01 13:43:55.888726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.310 [2024-10-01 13:43:55.888770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.310 [2024-10-01 13:43:55.888791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.310 [2024-10-01 13:43:55.889739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.310 [2024-10-01 13:43:55.889964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.310 [2024-10-01 13:43:55.890001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.310 [2024-10-01 13:43:55.890050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.310 [2024-10-01 13:43:55.890136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.310 [2024-10-01 13:43:55.899381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.310 [2024-10-01 13:43:55.899508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.310 [2024-10-01 13:43:55.899564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.310 [2024-10-01 13:43:55.899586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.310 [2024-10-01 13:43:55.899624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.310 [2024-10-01 13:43:55.899677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.310 [2024-10-01 13:43:55.899711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.310 [2024-10-01 13:43:55.899738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.310 [2024-10-01 13:43:55.899794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.310 [2024-10-01 13:43:55.909677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.310 [2024-10-01 13:43:55.909816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.310 [2024-10-01 13:43:55.909857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.310 [2024-10-01 13:43:55.909878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.310 [2024-10-01 13:43:55.909913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.310 [2024-10-01 13:43:55.909946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.310 [2024-10-01 13:43:55.909963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.310 [2024-10-01 13:43:55.909978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.310 [2024-10-01 13:43:55.910010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.310 [2024-10-01 13:43:55.920125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.310 [2024-10-01 13:43:55.921007] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.310 [2024-10-01 13:43:55.921064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.310 [2024-10-01 13:43:55.921087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.310 [2024-10-01 13:43:55.921271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.310 [2024-10-01 13:43:55.921321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.311 [2024-10-01 13:43:55.921341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.311 [2024-10-01 13:43:55.921356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.311 [2024-10-01 13:43:55.921390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.311 [2024-10-01 13:43:55.931733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.311 [2024-10-01 13:43:55.931946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.311 [2024-10-01 13:43:55.932027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.311 [2024-10-01 13:43:55.932052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.311 [2024-10-01 13:43:55.933026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.311 [2024-10-01 13:43:55.933262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.311 [2024-10-01 13:43:55.933300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.311 [2024-10-01 13:43:55.933320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.311 [2024-10-01 13:43:55.934632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.311 [2024-10-01 13:43:55.942677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.311 [2024-10-01 13:43:55.942816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.311 [2024-10-01 13:43:55.942851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.311 [2024-10-01 13:43:55.942870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.311 [2024-10-01 13:43:55.942904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.311 [2024-10-01 13:43:55.942937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.311 [2024-10-01 13:43:55.942955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.311 [2024-10-01 13:43:55.942969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.311 [2024-10-01 13:43:55.943002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.311 [2024-10-01 13:43:55.952928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.311 [2024-10-01 13:43:55.953070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.311 [2024-10-01 13:43:55.953103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.311 [2024-10-01 13:43:55.953122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.311 [2024-10-01 13:43:55.953156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.311 [2024-10-01 13:43:55.953189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.311 [2024-10-01 13:43:55.953207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.311 [2024-10-01 13:43:55.953221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.311 [2024-10-01 13:43:55.953253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.311 [2024-10-01 13:43:55.964110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.311 [2024-10-01 13:43:55.964240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.311 [2024-10-01 13:43:55.964273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.311 [2024-10-01 13:43:55.964291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.311 [2024-10-01 13:43:55.964325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.311 [2024-10-01 13:43:55.964382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.311 [2024-10-01 13:43:55.964401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.311 [2024-10-01 13:43:55.964416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.311 [2024-10-01 13:43:55.964448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.311 [2024-10-01 13:43:55.974393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.311 [2024-10-01 13:43:55.974518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.311 [2024-10-01 13:43:55.974565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.311 [2024-10-01 13:43:55.974586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.311 [2024-10-01 13:43:55.974634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.311 [2024-10-01 13:43:55.975568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.311 [2024-10-01 13:43:55.975607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.311 [2024-10-01 13:43:55.975625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.311 [2024-10-01 13:43:55.975826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.311 [2024-10-01 13:43:55.985277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.311 [2024-10-01 13:43:55.985398] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.311 [2024-10-01 13:43:55.985431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.311 [2024-10-01 13:43:55.985449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.311 [2024-10-01 13:43:55.985482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.311 [2024-10-01 13:43:55.985514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.311 [2024-10-01 13:43:55.985532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.311 [2024-10-01 13:43:55.985564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.311 [2024-10-01 13:43:55.985597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.311 [2024-10-01 13:43:55.995412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.311 [2024-10-01 13:43:55.995552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.311 [2024-10-01 13:43:55.995586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.311 [2024-10-01 13:43:55.995604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.311 [2024-10-01 13:43:55.995639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.311 [2024-10-01 13:43:55.995672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.311 [2024-10-01 13:43:55.995689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.311 [2024-10-01 13:43:55.995703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.311 [2024-10-01 13:43:55.995759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.311 [2024-10-01 13:43:56.006561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.311 [2024-10-01 13:43:56.006688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.311 [2024-10-01 13:43:56.006721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.311 [2024-10-01 13:43:56.006740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.311 [2024-10-01 13:43:56.006775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.311 [2024-10-01 13:43:56.006807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.312 [2024-10-01 13:43:56.006825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.312 [2024-10-01 13:43:56.006840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.312 [2024-10-01 13:43:56.006872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.312 [2024-10-01 13:43:56.016739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.312 [2024-10-01 13:43:56.016861] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.312 [2024-10-01 13:43:56.016894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.312 [2024-10-01 13:43:56.016912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.312 [2024-10-01 13:43:56.016960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.312 [2024-10-01 13:43:56.017907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.312 [2024-10-01 13:43:56.017946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.312 [2024-10-01 13:43:56.017966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.312 [2024-10-01 13:43:56.018155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.312 [2024-10-01 13:43:56.027606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.312 [2024-10-01 13:43:56.027744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.312 [2024-10-01 13:43:56.027779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.312 [2024-10-01 13:43:56.027798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.312 [2024-10-01 13:43:56.027833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.312 [2024-10-01 13:43:56.027866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.312 [2024-10-01 13:43:56.027897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.312 [2024-10-01 13:43:56.027912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.312 [2024-10-01 13:43:56.027945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.312 [2024-10-01 13:43:56.038055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.312 [2024-10-01 13:43:56.038200] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.312 [2024-10-01 13:43:56.038235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.312 [2024-10-01 13:43:56.038297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.312 [2024-10-01 13:43:56.038336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.312 [2024-10-01 13:43:56.038369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.312 [2024-10-01 13:43:56.038387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.312 [2024-10-01 13:43:56.038402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.312 [2024-10-01 13:43:56.038435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.312 [2024-10-01 13:43:56.049550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.312 [2024-10-01 13:43:56.049702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.312 [2024-10-01 13:43:56.049746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.312 [2024-10-01 13:43:56.049772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.312 [2024-10-01 13:43:56.049810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.312 [2024-10-01 13:43:56.049844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.312 [2024-10-01 13:43:56.049863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.312 [2024-10-01 13:43:56.049878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.312 [2024-10-01 13:43:56.049911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.312 [2024-10-01 13:43:56.060356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.312 [2024-10-01 13:43:56.060486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.312 [2024-10-01 13:43:56.060520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.312 [2024-10-01 13:43:56.060555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.312 [2024-10-01 13:43:56.060594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.312 [2024-10-01 13:43:56.060627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.312 [2024-10-01 13:43:56.060645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.312 [2024-10-01 13:43:56.060660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.312 [2024-10-01 13:43:56.060693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.312 [2024-10-01 13:43:56.071849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.312 [2024-10-01 13:43:56.072080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.312 [2024-10-01 13:43:56.072119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.312 [2024-10-01 13:43:56.072140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.312 [2024-10-01 13:43:56.072178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.312 [2024-10-01 13:43:56.072213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.312 [2024-10-01 13:43:56.072262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.312 [2024-10-01 13:43:56.072280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.312 [2024-10-01 13:43:56.072315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.312 [2024-10-01 13:43:56.082269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.312 [2024-10-01 13:43:56.082438] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.312 [2024-10-01 13:43:56.082475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.312 [2024-10-01 13:43:56.082495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.312 [2024-10-01 13:43:56.082532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.312 [2024-10-01 13:43:56.082586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.312 [2024-10-01 13:43:56.082604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.312 [2024-10-01 13:43:56.082619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.312 [2024-10-01 13:43:56.082652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.312 [2024-10-01 13:43:56.093339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.312 [2024-10-01 13:43:56.093473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.312 [2024-10-01 13:43:56.093514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.312 [2024-10-01 13:43:56.093548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.312 [2024-10-01 13:43:56.093587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.312 [2024-10-01 13:43:56.093636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.313 [2024-10-01 13:43:56.093659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.313 [2024-10-01 13:43:56.093674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.313 [2024-10-01 13:43:56.093707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.313 [2024-10-01 13:43:56.103475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.313 [2024-10-01 13:43:56.103617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.313 [2024-10-01 13:43:56.103652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.313 [2024-10-01 13:43:56.103671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.313 [2024-10-01 13:43:56.104628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.313 [2024-10-01 13:43:56.104848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.313 [2024-10-01 13:43:56.104884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.313 [2024-10-01 13:43:56.104902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.313 [2024-10-01 13:43:56.104981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.313 [2024-10-01 13:43:56.114308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.313 [2024-10-01 13:43:56.114495] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.313 [2024-10-01 13:43:56.114531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.313 [2024-10-01 13:43:56.114567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.313 [2024-10-01 13:43:56.114603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.313 [2024-10-01 13:43:56.114637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.313 [2024-10-01 13:43:56.114654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.313 [2024-10-01 13:43:56.114669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.313 [2024-10-01 13:43:56.114701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.313 [2024-10-01 13:43:56.124522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.313 [2024-10-01 13:43:56.124662] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.313 [2024-10-01 13:43:56.124696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.313 [2024-10-01 13:43:56.124714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.313 [2024-10-01 13:43:56.124748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.313 [2024-10-01 13:43:56.124780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.313 [2024-10-01 13:43:56.124798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.313 [2024-10-01 13:43:56.124812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.313 [2024-10-01 13:43:56.124844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.313 [2024-10-01 13:43:56.135628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.313 [2024-10-01 13:43:56.135760] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.313 [2024-10-01 13:43:56.135794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.313 [2024-10-01 13:43:56.135813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.313 [2024-10-01 13:43:56.135847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.313 [2024-10-01 13:43:56.135910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.313 [2024-10-01 13:43:56.135934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.313 [2024-10-01 13:43:56.135949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.313 [2024-10-01 13:43:56.135983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.313 [2024-10-01 13:43:56.145776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.313 [2024-10-01 13:43:56.145903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.313 [2024-10-01 13:43:56.145937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.313 [2024-10-01 13:43:56.145955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.313 [2024-10-01 13:43:56.146923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.313 [2024-10-01 13:43:56.147153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.313 [2024-10-01 13:43:56.147189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.313 [2024-10-01 13:43:56.147207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.313 [2024-10-01 13:43:56.147287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.313 [2024-10-01 13:43:56.156582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.313 [2024-10-01 13:43:56.156706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.313 [2024-10-01 13:43:56.156742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.313 [2024-10-01 13:43:56.156761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.313 [2024-10-01 13:43:56.156808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.313 [2024-10-01 13:43:56.156846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.313 [2024-10-01 13:43:56.156865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.313 [2024-10-01 13:43:56.156879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.313 [2024-10-01 13:43:56.156911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.313 [2024-10-01 13:43:56.166684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.313 [2024-10-01 13:43:56.166812] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.313 [2024-10-01 13:43:56.166846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.313 [2024-10-01 13:43:56.166865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.313 [2024-10-01 13:43:56.166898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.313 [2024-10-01 13:43:56.166931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.313 [2024-10-01 13:43:56.166950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.313 [2024-10-01 13:43:56.166965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.313 [2024-10-01 13:43:56.166996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.313 [2024-10-01 13:43:56.177888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.313 [2024-10-01 13:43:56.178094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.313 [2024-10-01 13:43:56.178131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.313 [2024-10-01 13:43:56.178152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.314 [2024-10-01 13:43:56.178204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.314 [2024-10-01 13:43:56.178242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.314 [2024-10-01 13:43:56.178261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.314 [2024-10-01 13:43:56.178310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.314 [2024-10-01 13:43:56.178346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.314 [2024-10-01 13:43:56.188697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.314 [2024-10-01 13:43:56.188830] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.314 [2024-10-01 13:43:56.188869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.314 [2024-10-01 13:43:56.188888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.314 [2024-10-01 13:43:56.188922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.314 [2024-10-01 13:43:56.189866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.314 [2024-10-01 13:43:56.189908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.314 [2024-10-01 13:43:56.189926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.314 [2024-10-01 13:43:56.190142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.314 [2024-10-01 13:43:56.199768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.314 [2024-10-01 13:43:56.199970] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.314 [2024-10-01 13:43:56.200009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.314 [2024-10-01 13:43:56.200029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.314 [2024-10-01 13:43:56.200066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.314 [2024-10-01 13:43:56.200100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.314 [2024-10-01 13:43:56.200119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.314 [2024-10-01 13:43:56.200134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.314 [2024-10-01 13:43:56.200168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.314 [2024-10-01 13:43:56.209932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.314 [2024-10-01 13:43:56.210063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.314 [2024-10-01 13:43:56.210096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.314 [2024-10-01 13:43:56.210115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.314 [2024-10-01 13:43:56.210149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.314 [2024-10-01 13:43:56.210181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.314 [2024-10-01 13:43:56.210198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.314 [2024-10-01 13:43:56.210212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.314 [2024-10-01 13:43:56.210244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.314 [2024-10-01 13:43:56.221110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.314 [2024-10-01 13:43:56.221245] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.314 [2024-10-01 13:43:56.221308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.314 [2024-10-01 13:43:56.221329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.314 [2024-10-01 13:43:56.221364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.314 [2024-10-01 13:43:56.221398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.314 [2024-10-01 13:43:56.221416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.314 [2024-10-01 13:43:56.221430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.314 [2024-10-01 13:43:56.221463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.314 [2024-10-01 13:43:56.232133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.314 [2024-10-01 13:43:56.232264] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.314 [2024-10-01 13:43:56.232298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.314 [2024-10-01 13:43:56.232317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.314 [2024-10-01 13:43:56.232350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.314 [2024-10-01 13:43:56.232383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.314 [2024-10-01 13:43:56.232401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.315 [2024-10-01 13:43:56.232416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.315 [2024-10-01 13:43:56.232447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.315 [2024-10-01 13:43:56.243218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.315 [2024-10-01 13:43:56.243345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.315 [2024-10-01 13:43:56.243380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.315 [2024-10-01 13:43:56.243398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.315 [2024-10-01 13:43:56.243432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.315 [2024-10-01 13:43:56.243464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.315 [2024-10-01 13:43:56.243482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.315 [2024-10-01 13:43:56.243496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.315 [2024-10-01 13:43:56.243529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.315 [2024-10-01 13:43:56.253352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.315 [2024-10-01 13:43:56.253495] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.315 [2024-10-01 13:43:56.253552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.315 [2024-10-01 13:43:56.253576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.315 [2024-10-01 13:43:56.253612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.315 [2024-10-01 13:43:56.253679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.315 [2024-10-01 13:43:56.253698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.315 [2024-10-01 13:43:56.253719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.315 [2024-10-01 13:43:56.253752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.315 [2024-10-01 13:43:56.264557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.315 [2024-10-01 13:43:56.264710] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.315 [2024-10-01 13:43:56.264744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.315 [2024-10-01 13:43:56.264763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.315 [2024-10-01 13:43:56.264799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.315 [2024-10-01 13:43:56.264832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.315 [2024-10-01 13:43:56.264851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.315 [2024-10-01 13:43:56.264866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.315 [2024-10-01 13:43:56.264898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.315 [2024-10-01 13:43:56.275219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.315 [2024-10-01 13:43:56.275353] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.315 [2024-10-01 13:43:56.275392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.315 [2024-10-01 13:43:56.275411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.315 [2024-10-01 13:43:56.275446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.315 [2024-10-01 13:43:56.275479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.315 [2024-10-01 13:43:56.275496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.315 [2024-10-01 13:43:56.275510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.315 [2024-10-01 13:43:56.275560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.315 [2024-10-01 13:43:56.286804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.315 [2024-10-01 13:43:56.286961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.315 [2024-10-01 13:43:56.286998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.315 [2024-10-01 13:43:56.287018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.315 [2024-10-01 13:43:56.287053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.315 [2024-10-01 13:43:56.287087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.315 [2024-10-01 13:43:56.287105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.315 [2024-10-01 13:43:56.287120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.315 [2024-10-01 13:43:56.287190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.315 [2024-10-01 13:43:56.297319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.315 [2024-10-01 13:43:56.297477] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.315 [2024-10-01 13:43:56.297513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.315 [2024-10-01 13:43:56.297548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.315 [2024-10-01 13:43:56.297589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.315 [2024-10-01 13:43:56.297623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.315 [2024-10-01 13:43:56.297641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.315 [2024-10-01 13:43:56.297656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.315 [2024-10-01 13:43:56.297689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.315 [2024-10-01 13:43:56.307609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.315 [2024-10-01 13:43:56.307746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.315 [2024-10-01 13:43:56.307781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.315 [2024-10-01 13:43:56.307801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.315 [2024-10-01 13:43:56.307835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.315 [2024-10-01 13:43:56.307879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.315 [2024-10-01 13:43:56.307900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.315 [2024-10-01 13:43:56.307916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.315 [2024-10-01 13:43:56.307949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.315 [2024-10-01 13:43:56.317977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.315 [2024-10-01 13:43:56.318112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.315 [2024-10-01 13:43:56.318148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.315 [2024-10-01 13:43:56.318176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.315 [2024-10-01 13:43:56.319131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.315 [2024-10-01 13:43:56.319798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.315 [2024-10-01 13:43:56.319839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.315 [2024-10-01 13:43:56.319858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.315 [2024-10-01 13:43:56.319968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.315 [2024-10-01 13:43:56.328092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.315 [2024-10-01 13:43:56.328226] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.315 [2024-10-01 13:43:56.328260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.315 [2024-10-01 13:43:56.328306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.315 [2024-10-01 13:43:56.328343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.315 [2024-10-01 13:43:56.328376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.315 [2024-10-01 13:43:56.328394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.315 [2024-10-01 13:43:56.328408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.315 [2024-10-01 13:43:56.329179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.315 [2024-10-01 13:43:56.338300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.315 [2024-10-01 13:43:56.338491] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.315 [2024-10-01 13:43:56.338528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.315 [2024-10-01 13:43:56.338564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.315 [2024-10-01 13:43:56.338603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.315 [2024-10-01 13:43:56.338667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.315 [2024-10-01 13:43:56.338686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.315 [2024-10-01 13:43:56.338702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.315 [2024-10-01 13:43:56.338736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.315 [2024-10-01 13:43:56.348451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.315 [2024-10-01 13:43:56.348656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.315 [2024-10-01 13:43:56.348693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.315 [2024-10-01 13:43:56.348712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.315 [2024-10-01 13:43:56.348749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.315 [2024-10-01 13:43:56.348783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.315 [2024-10-01 13:43:56.348814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.315 [2024-10-01 13:43:56.348835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.315 [2024-10-01 13:43:56.348871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.315 [2024-10-01 13:43:56.359520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.359684] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.359719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.359739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.360705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.316 [2024-10-01 13:43:56.360922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.316 [2024-10-01 13:43:56.360982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.316 [2024-10-01 13:43:56.361001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.316 [2024-10-01 13:43:56.361094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.316 [2024-10-01 13:43:56.370277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.370402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.370437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.370455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.370489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.316 [2024-10-01 13:43:56.370528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.316 [2024-10-01 13:43:56.370562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.316 [2024-10-01 13:43:56.370578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.316 [2024-10-01 13:43:56.370612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.316 [2024-10-01 13:43:56.380420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.380579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.380630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.380656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.380693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.316 [2024-10-01 13:43:56.380734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.316 [2024-10-01 13:43:56.380752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.316 [2024-10-01 13:43:56.380767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.316 [2024-10-01 13:43:56.380800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.316 [2024-10-01 13:43:56.391787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.392001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.392039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.392060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.392106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.316 [2024-10-01 13:43:56.392140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.316 [2024-10-01 13:43:56.392158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.316 [2024-10-01 13:43:56.392174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.316 [2024-10-01 13:43:56.392207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.316 [2024-10-01 13:43:56.402025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.402190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.402224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.402244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.403217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.316 [2024-10-01 13:43:56.403447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.316 [2024-10-01 13:43:56.403484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.316 [2024-10-01 13:43:56.403503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.316 [2024-10-01 13:43:56.403600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.316 [2024-10-01 13:43:56.412876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.413008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.413042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.413060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.413094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.316 [2024-10-01 13:43:56.413136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.316 [2024-10-01 13:43:56.413154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.316 [2024-10-01 13:43:56.413169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.316 [2024-10-01 13:43:56.413208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.316 [2024-10-01 13:43:56.422990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.423128] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.423163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.423183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.423217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.316 [2024-10-01 13:43:56.423250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.316 [2024-10-01 13:43:56.423268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.316 [2024-10-01 13:43:56.423282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.316 [2024-10-01 13:43:56.423319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.316 [2024-10-01 13:43:56.434039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.434164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.434198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.434216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.434275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.316 [2024-10-01 13:43:56.434309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.316 [2024-10-01 13:43:56.434327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.316 [2024-10-01 13:43:56.434341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.316 [2024-10-01 13:43:56.434373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.316 [2024-10-01 13:43:56.444231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.444356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.444395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.444414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.444461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.316 [2024-10-01 13:43:56.444498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.316 [2024-10-01 13:43:56.444515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.316 [2024-10-01 13:43:56.444530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.316 [2024-10-01 13:43:56.445486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.316 [2024-10-01 13:43:56.455164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.455362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.455401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.455421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.455457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.316 [2024-10-01 13:43:56.455491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.316 [2024-10-01 13:43:56.455510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.316 [2024-10-01 13:43:56.455526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.316 [2024-10-01 13:43:56.455578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.316 [2024-10-01 13:43:56.465300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.465429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.465462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.465480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.465513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.316 [2024-10-01 13:43:56.465561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.316 [2024-10-01 13:43:56.465582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.316 [2024-10-01 13:43:56.465623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.316 [2024-10-01 13:43:56.465658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.316 [2024-10-01 13:43:56.476305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.476436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.476471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.476491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.476526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.316 [2024-10-01 13:43:56.476576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.316 [2024-10-01 13:43:56.476595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.316 [2024-10-01 13:43:56.476610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.316 [2024-10-01 13:43:56.476642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.316 [2024-10-01 13:43:56.486409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.486547] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.486581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.486600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.486648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.316 [2024-10-01 13:43:56.486685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.316 [2024-10-01 13:43:56.486703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.316 [2024-10-01 13:43:56.486718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.316 [2024-10-01 13:43:56.486750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.316 [2024-10-01 13:43:56.496507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.316 [2024-10-01 13:43:56.496643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.316 [2024-10-01 13:43:56.496678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.316 [2024-10-01 13:43:56.496696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.316 [2024-10-01 13:43:56.496729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.317 [2024-10-01 13:43:56.496762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.317 [2024-10-01 13:43:56.496780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.317 [2024-10-01 13:43:56.496794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.317 [2024-10-01 13:43:56.496826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.317 [2024-10-01 13:43:56.508156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.317 [2024-10-01 13:43:56.508551] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.317 [2024-10-01 13:43:56.508634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.317 [2024-10-01 13:43:56.508659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.317 [2024-10-01 13:43:56.508707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.317 [2024-10-01 13:43:56.508745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.317 [2024-10-01 13:43:56.508763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.317 [2024-10-01 13:43:56.508778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.317 [2024-10-01 13:43:56.508815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.317 [2024-10-01 13:43:56.518297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.317 [2024-10-01 13:43:56.518444] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.317 [2024-10-01 13:43:56.518479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.317 [2024-10-01 13:43:56.518497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.317 [2024-10-01 13:43:56.518531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.317 [2024-10-01 13:43:56.518581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.317 [2024-10-01 13:43:56.518600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.317 [2024-10-01 13:43:56.518615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.317 [2024-10-01 13:43:56.518647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.317 [2024-10-01 13:43:56.528431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.317 [2024-10-01 13:43:56.528584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.317 [2024-10-01 13:43:56.528619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.317 [2024-10-01 13:43:56.528638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.317 [2024-10-01 13:43:56.528672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.317 [2024-10-01 13:43:56.528705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.317 [2024-10-01 13:43:56.528723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.317 [2024-10-01 13:43:56.528737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.317 [2024-10-01 13:43:56.528769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.317 [2024-10-01 13:43:56.539177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.317 [2024-10-01 13:43:56.539319] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.317 [2024-10-01 13:43:56.539353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.317 [2024-10-01 13:43:56.539372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.317 [2024-10-01 13:43:56.539406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.317 [2024-10-01 13:43:56.539485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.317 [2024-10-01 13:43:56.539506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.317 [2024-10-01 13:43:56.539525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.317 [2024-10-01 13:43:56.539600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.317 [2024-10-01 13:43:56.549433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.317 [2024-10-01 13:43:56.549579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.317 [2024-10-01 13:43:56.549615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.317 [2024-10-01 13:43:56.549634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.317 [2024-10-01 13:43:56.549670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.317 [2024-10-01 13:43:56.549703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.317 [2024-10-01 13:43:56.549721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.317 [2024-10-01 13:43:56.549736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.317 [2024-10-01 13:43:56.549768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.317 [2024-10-01 13:43:56.559553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.317 [2024-10-01 13:43:56.559680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.317 [2024-10-01 13:43:56.559713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.317 [2024-10-01 13:43:56.559732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.317 [2024-10-01 13:43:56.559766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.317 [2024-10-01 13:43:56.559798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.317 [2024-10-01 13:43:56.559816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.317 [2024-10-01 13:43:56.559830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.317 [2024-10-01 13:43:56.559862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.317 [2024-10-01 13:43:56.570259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.317 [2024-10-01 13:43:56.570465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.317 [2024-10-01 13:43:56.570502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.317 [2024-10-01 13:43:56.570522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.317 [2024-10-01 13:43:56.570576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.317 [2024-10-01 13:43:56.570612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.317 [2024-10-01 13:43:56.570630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.317 [2024-10-01 13:43:56.570646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.317 [2024-10-01 13:43:56.570718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.317 [2024-10-01 13:43:56.580612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.317 [2024-10-01 13:43:56.580738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.317 [2024-10-01 13:43:56.580772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.317 [2024-10-01 13:43:56.580790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.317 [2024-10-01 13:43:56.580824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.317 [2024-10-01 13:43:56.581755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.317 [2024-10-01 13:43:56.581795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.317 [2024-10-01 13:43:56.581813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.317 [2024-10-01 13:43:56.582005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.317 [2024-10-01 13:43:56.591510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.317 [2024-10-01 13:43:56.591651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.317 [2024-10-01 13:43:56.591685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.317 [2024-10-01 13:43:56.591704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.317 [2024-10-01 13:43:56.591738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.317 [2024-10-01 13:43:56.591771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.317 [2024-10-01 13:43:56.591788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.317 [2024-10-01 13:43:56.591802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.317 [2024-10-01 13:43:56.591834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.317 [2024-10-01 13:43:56.601769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.317 [2024-10-01 13:43:56.601908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.317 [2024-10-01 13:43:56.601959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.317 [2024-10-01 13:43:56.601980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.317 [2024-10-01 13:43:56.602015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.317 [2024-10-01 13:43:56.602048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.317 [2024-10-01 13:43:56.602066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.317 [2024-10-01 13:43:56.602085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.317 [2024-10-01 13:43:56.602125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.317 [2024-10-01 13:43:56.612914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.317 [2024-10-01 13:43:56.613048] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.317 [2024-10-01 13:43:56.613088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.317 [2024-10-01 13:43:56.613134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.317 [2024-10-01 13:43:56.613172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.317 [2024-10-01 13:43:56.613206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.317 [2024-10-01 13:43:56.613224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.317 [2024-10-01 13:43:56.613240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.317 [2024-10-01 13:43:56.613273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.317 [2024-10-01 13:43:56.623022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.317 [2024-10-01 13:43:56.623165] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.317 [2024-10-01 13:43:56.623199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.317 [2024-10-01 13:43:56.623217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.317 [2024-10-01 13:43:56.624191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.317 [2024-10-01 13:43:56.624440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.317 [2024-10-01 13:43:56.624479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.317 [2024-10-01 13:43:56.624498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.317 [2024-10-01 13:43:56.624595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.317 [2024-10-01 13:43:56.633937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.317 [2024-10-01 13:43:56.634113] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.317 [2024-10-01 13:43:56.634151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.317 [2024-10-01 13:43:56.634172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.318 [2024-10-01 13:43:56.634219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.318 [2024-10-01 13:43:56.634253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.318 [2024-10-01 13:43:56.634271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.318 [2024-10-01 13:43:56.634287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.318 [2024-10-01 13:43:56.634584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.318 [2024-10-01 13:43:56.644093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.318 [2024-10-01 13:43:56.644314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.318 [2024-10-01 13:43:56.644352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.318 [2024-10-01 13:43:56.644381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.318 [2024-10-01 13:43:56.644423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.318 [2024-10-01 13:43:56.644458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.318 [2024-10-01 13:43:56.644506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.318 [2024-10-01 13:43:56.644524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.318 [2024-10-01 13:43:56.644576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.318 [2024-10-01 13:43:56.655333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.318 [2024-10-01 13:43:56.655498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.318 [2024-10-01 13:43:56.655551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.318 [2024-10-01 13:43:56.655574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.318 [2024-10-01 13:43:56.655611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.318 [2024-10-01 13:43:56.655645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.318 [2024-10-01 13:43:56.655663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.318 [2024-10-01 13:43:56.655678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.318 [2024-10-01 13:43:56.655710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.318 8218.75 IOPS, 32.10 MiB/s [2024-10-01 13:43:56.667080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.318 [2024-10-01 13:43:56.668730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.318 [2024-10-01 13:43:56.668779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.318 [2024-10-01 13:43:56.668801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.318 [2024-10-01 13:43:56.669720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.318 [2024-10-01 13:43:56.669925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.318 [2024-10-01 13:43:56.669965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.318 [2024-10-01 13:43:56.669983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.318 [2024-10-01 13:43:56.670024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.318 [2024-10-01 13:43:56.677182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.318 [2024-10-01 13:43:56.677306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.318 [2024-10-01 13:43:56.677344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.318 [2024-10-01 13:43:56.677364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.318 [2024-10-01 13:43:56.677398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.318 [2024-10-01 13:43:56.677431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.318 [2024-10-01 13:43:56.677448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.318 [2024-10-01 13:43:56.677462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.318 [2024-10-01 13:43:56.677500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
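The "8218.75 IOPS, 32.10 MiB/s" figure interleaved above is the periodic throughput sample from the I/O workload that keeps running while the controller resets fail. As a sanity check (not additional data from the log), the two numbers are consistent with a 4 KiB request size:

32.10 MiB/s / 8218.75 IOPS = (32.10 * 1,048,576 B/s) / 8218.75 IO/s ~= 4095 B ~= 4 KiB per I/O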
00:16:17.318 [2024-10-01 13:43:56.687280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.318 [2024-10-01 13:43:56.687409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.318 [2024-10-01 13:43:56.687444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.318 [2024-10-01 13:43:56.687462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.318 [2024-10-01 13:43:56.687496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.318 [2024-10-01 13:43:56.688342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.318 [2024-10-01 13:43:56.688384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.318 [2024-10-01 13:43:56.688403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.318 [2024-10-01 13:43:56.688622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.318 [2024-10-01 13:43:56.697382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.318 [2024-10-01 13:43:56.697504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.318 [2024-10-01 13:43:56.697551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.318 [2024-10-01 13:43:56.697573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.318 [2024-10-01 13:43:56.697607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.318 [2024-10-01 13:43:56.697640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.318 [2024-10-01 13:43:56.697657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.318 [2024-10-01 13:43:56.697671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.318 [2024-10-01 13:43:56.697703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.318 [2024-10-01 13:43:56.707903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.318 [2024-10-01 13:43:56.708041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.318 [2024-10-01 13:43:56.708075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.318 [2024-10-01 13:43:56.708094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.318 [2024-10-01 13:43:56.709058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.318 [2024-10-01 13:43:56.709305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.318 [2024-10-01 13:43:56.709345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.318 [2024-10-01 13:43:56.709365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.318 [2024-10-01 13:43:56.709446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.318 [2024-10-01 13:43:56.719138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.318 [2024-10-01 13:43:56.719260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.318 [2024-10-01 13:43:56.719293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.318 [2024-10-01 13:43:56.719311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.318 [2024-10-01 13:43:56.719366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.318 [2024-10-01 13:43:56.719400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.318 [2024-10-01 13:43:56.719418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.318 [2024-10-01 13:43:56.719432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.318 [2024-10-01 13:43:56.719464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.318 [2024-10-01 13:43:56.729444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.318 [2024-10-01 13:43:56.729600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.318 [2024-10-01 13:43:56.729636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.318 [2024-10-01 13:43:56.729656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.318 [2024-10-01 13:43:56.729691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.318 [2024-10-01 13:43:56.729724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.319 [2024-10-01 13:43:56.729741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.319 [2024-10-01 13:43:56.729764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.319 [2024-10-01 13:43:56.729805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.319 [2024-10-01 13:43:56.741023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.319 [2024-10-01 13:43:56.741149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.319 [2024-10-01 13:43:56.741187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.319 [2024-10-01 13:43:56.741208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.319 [2024-10-01 13:43:56.741243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.319 [2024-10-01 13:43:56.741276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.319 [2024-10-01 13:43:56.741293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.319 [2024-10-01 13:43:56.741307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.319 [2024-10-01 13:43:56.741340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.319 [2024-10-01 13:43:56.751122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.319 [2024-10-01 13:43:56.751245] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.319 [2024-10-01 13:43:56.751279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.319 [2024-10-01 13:43:56.751297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.319 [2024-10-01 13:43:56.752265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.319 [2024-10-01 13:43:56.752500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.319 [2024-10-01 13:43:56.752553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.319 [2024-10-01 13:43:56.752594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.319 [2024-10-01 13:43:56.752679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.319 [2024-10-01 13:43:56.762304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.319 [2024-10-01 13:43:56.762442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.319 [2024-10-01 13:43:56.762477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.319 [2024-10-01 13:43:56.762496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.319 [2024-10-01 13:43:56.762530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.319 [2024-10-01 13:43:56.762583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.319 [2024-10-01 13:43:56.762602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.319 [2024-10-01 13:43:56.762616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.319 [2024-10-01 13:43:56.762894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.319 [2024-10-01 13:43:56.772431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.319 [2024-10-01 13:43:56.772572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.319 [2024-10-01 13:43:56.772607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.319 [2024-10-01 13:43:56.772626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.319 [2024-10-01 13:43:56.772661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.319 [2024-10-01 13:43:56.772694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.319 [2024-10-01 13:43:56.772712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.319 [2024-10-01 13:43:56.772727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.319 [2024-10-01 13:43:56.772759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.319 [2024-10-01 13:43:56.783957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.319 [2024-10-01 13:43:56.784166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.319 [2024-10-01 13:43:56.784203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.319 [2024-10-01 13:43:56.784223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.319 [2024-10-01 13:43:56.784260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.319 [2024-10-01 13:43:56.784294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.319 [2024-10-01 13:43:56.784311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.319 [2024-10-01 13:43:56.784328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.319 [2024-10-01 13:43:56.784360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.319 [2024-10-01 13:43:56.794521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.319 [2024-10-01 13:43:56.794699] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.319 [2024-10-01 13:43:56.794734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.319 [2024-10-01 13:43:56.794773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.319 [2024-10-01 13:43:56.795762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.319 [2024-10-01 13:43:56.796012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.319 [2024-10-01 13:43:56.796052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.319 [2024-10-01 13:43:56.796071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.319 [2024-10-01 13:43:56.796158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.319 [2024-10-01 13:43:56.805570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.319 [2024-10-01 13:43:56.805695] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.319 [2024-10-01 13:43:56.805728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.319 [2024-10-01 13:43:56.805747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.319 [2024-10-01 13:43:56.805781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.319 [2024-10-01 13:43:56.805814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.319 [2024-10-01 13:43:56.805832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.319 [2024-10-01 13:43:56.805846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.319 [2024-10-01 13:43:56.805878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.319 [2024-10-01 13:43:56.816304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.319 [2024-10-01 13:43:56.816443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.319 [2024-10-01 13:43:56.816480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.319 [2024-10-01 13:43:56.816499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.319 [2024-10-01 13:43:56.816548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.319 [2024-10-01 13:43:56.816585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.319 [2024-10-01 13:43:56.816605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.319 [2024-10-01 13:43:56.816619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.319 [2024-10-01 13:43:56.816658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.319 [2024-10-01 13:43:56.828043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.319 [2024-10-01 13:43:56.828189] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.319 [2024-10-01 13:43:56.828224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.319 [2024-10-01 13:43:56.828242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.319 [2024-10-01 13:43:56.828287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.319 [2024-10-01 13:43:56.828346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.319 [2024-10-01 13:43:56.828365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.319 [2024-10-01 13:43:56.828380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.319 [2024-10-01 13:43:56.828413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.319 [2024-10-01 13:43:56.838581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.319 [2024-10-01 13:43:56.838721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.319 [2024-10-01 13:43:56.838764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.319 [2024-10-01 13:43:56.838785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.319 [2024-10-01 13:43:56.839725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.319 [2024-10-01 13:43:56.839959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.319 [2024-10-01 13:43:56.839996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.319 [2024-10-01 13:43:56.840013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.319 [2024-10-01 13:43:56.840119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.319 [2024-10-01 13:43:56.849460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.319 [2024-10-01 13:43:56.849612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.319 [2024-10-01 13:43:56.849646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.319 [2024-10-01 13:43:56.849666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.319 [2024-10-01 13:43:56.849701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.319 [2024-10-01 13:43:56.849734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.319 [2024-10-01 13:43:56.849751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.319 [2024-10-01 13:43:56.849765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.319 [2024-10-01 13:43:56.849798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.319 [2024-10-01 13:43:56.859948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.319 [2024-10-01 13:43:56.860091] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.319 [2024-10-01 13:43:56.860125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.319 [2024-10-01 13:43:56.860145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.320 [2024-10-01 13:43:56.860179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.320 [2024-10-01 13:43:56.860212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.320 [2024-10-01 13:43:56.860229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.320 [2024-10-01 13:43:56.860244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.320 [2024-10-01 13:43:56.860307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.320 [2024-10-01 13:43:56.871352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.320 [2024-10-01 13:43:56.871521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.320 [2024-10-01 13:43:56.871578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.320 [2024-10-01 13:43:56.871603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.320 [2024-10-01 13:43:56.871641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.320 [2024-10-01 13:43:56.871674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.320 [2024-10-01 13:43:56.871692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.320 [2024-10-01 13:43:56.871707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.320 [2024-10-01 13:43:56.871742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.320 [2024-10-01 13:43:56.881656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.320 [2024-10-01 13:43:56.881811] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.320 [2024-10-01 13:43:56.881846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.320 [2024-10-01 13:43:56.881867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.320 [2024-10-01 13:43:56.882838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.320 [2024-10-01 13:43:56.883062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.320 [2024-10-01 13:43:56.883100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.320 [2024-10-01 13:43:56.883127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.320 [2024-10-01 13:43:56.883209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.320 [2024-10-01 13:43:56.892952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.320 [2024-10-01 13:43:56.893095] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.320 [2024-10-01 13:43:56.893130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.320 [2024-10-01 13:43:56.893149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.320 [2024-10-01 13:43:56.893194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.320 [2024-10-01 13:43:56.893228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.320 [2024-10-01 13:43:56.893245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.320 [2024-10-01 13:43:56.893261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.320 [2024-10-01 13:43:56.893293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.320 [2024-10-01 13:43:56.903158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.320 [2024-10-01 13:43:56.903342] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.320 [2024-10-01 13:43:56.903378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.320 [2024-10-01 13:43:56.903425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.320 [2024-10-01 13:43:56.903465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.320 [2024-10-01 13:43:56.903499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.320 [2024-10-01 13:43:56.903517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.320 [2024-10-01 13:43:56.903547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.320 [2024-10-01 13:43:56.903586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.320 [2024-10-01 13:43:56.914322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.320 [2024-10-01 13:43:56.914454] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.320 [2024-10-01 13:43:56.914489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.320 [2024-10-01 13:43:56.914508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.320 [2024-10-01 13:43:56.914558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.320 [2024-10-01 13:43:56.914611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.320 [2024-10-01 13:43:56.914634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.320 [2024-10-01 13:43:56.914649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.320 [2024-10-01 13:43:56.914683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.320 [2024-10-01 13:43:56.924549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.320 [2024-10-01 13:43:56.924668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.320 [2024-10-01 13:43:56.924702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.320 [2024-10-01 13:43:56.924721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.320 [2024-10-01 13:43:56.925653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.320 [2024-10-01 13:43:56.925871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.320 [2024-10-01 13:43:56.925899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.320 [2024-10-01 13:43:56.925915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.320 [2024-10-01 13:43:56.925994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.321 [2024-10-01 13:43:56.935454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.321 [2024-10-01 13:43:56.935614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.321 [2024-10-01 13:43:56.935653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.321 [2024-10-01 13:43:56.935672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.321 [2024-10-01 13:43:56.935721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.321 [2024-10-01 13:43:56.935759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.321 [2024-10-01 13:43:56.935803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.321 [2024-10-01 13:43:56.935819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.321 [2024-10-01 13:43:56.935853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.321 [2024-10-01 13:43:56.945620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.321 [2024-10-01 13:43:56.945760] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.321 [2024-10-01 13:43:56.945793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.321 [2024-10-01 13:43:56.945812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.321 [2024-10-01 13:43:56.945846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.321 [2024-10-01 13:43:56.945878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.321 [2024-10-01 13:43:56.945896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.321 [2024-10-01 13:43:56.945910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.321 [2024-10-01 13:43:56.945942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.321 [2024-10-01 13:43:56.956757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.321 [2024-10-01 13:43:56.956896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.321 [2024-10-01 13:43:56.956930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.321 [2024-10-01 13:43:56.956948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.321 [2024-10-01 13:43:56.956982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.321 [2024-10-01 13:43:56.957030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.321 [2024-10-01 13:43:56.957052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.321 [2024-10-01 13:43:56.957067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.321 [2024-10-01 13:43:56.957100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.321 [2024-10-01 13:43:56.966944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.321 [2024-10-01 13:43:56.967073] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.321 [2024-10-01 13:43:56.967107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.321 [2024-10-01 13:43:56.967126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.321 [2024-10-01 13:43:56.967160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.321 [2024-10-01 13:43:56.968129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.321 [2024-10-01 13:43:56.968171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.321 [2024-10-01 13:43:56.968191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.321 [2024-10-01 13:43:56.968399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.321 [2024-10-01 13:43:56.977844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.321 [2024-10-01 13:43:56.977979] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.321 [2024-10-01 13:43:56.978013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.321 [2024-10-01 13:43:56.978032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.321 [2024-10-01 13:43:56.978081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.321 [2024-10-01 13:43:56.978118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.321 [2024-10-01 13:43:56.978136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.321 [2024-10-01 13:43:56.978151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.321 [2024-10-01 13:43:56.978183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.321 [2024-10-01 13:43:56.988034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.321 [2024-10-01 13:43:56.988172] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.321 [2024-10-01 13:43:56.988206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.321 [2024-10-01 13:43:56.988224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.321 [2024-10-01 13:43:56.988259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.321 [2024-10-01 13:43:56.988292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.321 [2024-10-01 13:43:56.988310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.321 [2024-10-01 13:43:56.988325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.321 [2024-10-01 13:43:56.988357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.321 [2024-10-01 13:43:56.999150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.321 [2024-10-01 13:43:56.999285] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.321 [2024-10-01 13:43:56.999319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.321 [2024-10-01 13:43:56.999343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.321 [2024-10-01 13:43:56.999386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.321 [2024-10-01 13:43:56.999436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.321 [2024-10-01 13:43:56.999459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.321 [2024-10-01 13:43:56.999474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.321 [2024-10-01 13:43:56.999506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.321 [2024-10-01 13:43:57.009373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.321 [2024-10-01 13:43:57.009497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.321 [2024-10-01 13:43:57.009530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.322 [2024-10-01 13:43:57.009568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.322 [2024-10-01 13:43:57.010522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.322 [2024-10-01 13:43:57.010761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.322 [2024-10-01 13:43:57.010800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.322 [2024-10-01 13:43:57.010818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.322 [2024-10-01 13:43:57.010898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.322 [2024-10-01 13:43:57.020413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.322 [2024-10-01 13:43:57.020615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.322 [2024-10-01 13:43:57.020653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.322 [2024-10-01 13:43:57.020673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.322 [2024-10-01 13:43:57.020710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.322 [2024-10-01 13:43:57.020744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.322 [2024-10-01 13:43:57.020761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.322 [2024-10-01 13:43:57.020777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.322 [2024-10-01 13:43:57.020810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.322 [2024-10-01 13:43:57.030567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.322 [2024-10-01 13:43:57.030704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.322 [2024-10-01 13:43:57.030738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.322 [2024-10-01 13:43:57.030757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.322 [2024-10-01 13:43:57.030792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.322 [2024-10-01 13:43:57.030825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.322 [2024-10-01 13:43:57.030843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.322 [2024-10-01 13:43:57.030857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.322 [2024-10-01 13:43:57.030889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.322 [2024-10-01 13:43:57.041727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.322 [2024-10-01 13:43:57.041860] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.322 [2024-10-01 13:43:57.041893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.322 [2024-10-01 13:43:57.041912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.322 [2024-10-01 13:43:57.041947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.322 [2024-10-01 13:43:57.041995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.322 [2024-10-01 13:43:57.042017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.322 [2024-10-01 13:43:57.042061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.322 [2024-10-01 13:43:57.042096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.322 [2024-10-01 13:43:57.051928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.322 [2024-10-01 13:43:57.052051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.322 [2024-10-01 13:43:57.052084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.322 [2024-10-01 13:43:57.052103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.322 [2024-10-01 13:43:57.052137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.322 [2024-10-01 13:43:57.053065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.322 [2024-10-01 13:43:57.053105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.322 [2024-10-01 13:43:57.053123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.322 [2024-10-01 13:43:57.053311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.322 [2024-10-01 13:43:57.062744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.322 [2024-10-01 13:43:57.062867] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.322 [2024-10-01 13:43:57.062900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.322 [2024-10-01 13:43:57.062918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.322 [2024-10-01 13:43:57.062965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.322 [2024-10-01 13:43:57.063002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.322 [2024-10-01 13:43:57.063021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.322 [2024-10-01 13:43:57.063035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.322 [2024-10-01 13:43:57.063067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.322 [2024-10-01 13:43:57.072921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.322 [2024-10-01 13:43:57.073043] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.322 [2024-10-01 13:43:57.073077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.322 [2024-10-01 13:43:57.073096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.322 [2024-10-01 13:43:57.073129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.322 [2024-10-01 13:43:57.073168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.322 [2024-10-01 13:43:57.073190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.323 [2024-10-01 13:43:57.073204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.323 [2024-10-01 13:43:57.073236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.323 [2024-10-01 13:43:57.084191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.323 [2024-10-01 13:43:57.084394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.323 [2024-10-01 13:43:57.084430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.323 [2024-10-01 13:43:57.084449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.323 [2024-10-01 13:43:57.084483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.323 [2024-10-01 13:43:57.084516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.323 [2024-10-01 13:43:57.084548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.323 [2024-10-01 13:43:57.084567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.323 [2024-10-01 13:43:57.084600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.323 [2024-10-01 13:43:57.094641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.323 [2024-10-01 13:43:57.094843] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.323 [2024-10-01 13:43:57.094881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.323 [2024-10-01 13:43:57.094902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.323 [2024-10-01 13:43:57.095855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.323 [2024-10-01 13:43:57.096100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.323 [2024-10-01 13:43:57.096139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.323 [2024-10-01 13:43:57.096158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.323 [2024-10-01 13:43:57.096252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.323 [2024-10-01 13:43:57.105990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.323 [2024-10-01 13:43:57.106148] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.323 [2024-10-01 13:43:57.106186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.323 [2024-10-01 13:43:57.106206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.323 [2024-10-01 13:43:57.106241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.323 [2024-10-01 13:43:57.106275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.323 [2024-10-01 13:43:57.106293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.323 [2024-10-01 13:43:57.106309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.323 [2024-10-01 13:43:57.106362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.323 [2024-10-01 13:43:57.116504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.323 [2024-10-01 13:43:57.116723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.323 [2024-10-01 13:43:57.116761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.323 [2024-10-01 13:43:57.116782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.323 [2024-10-01 13:43:57.116818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.323 [2024-10-01 13:43:57.116894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.323 [2024-10-01 13:43:57.116914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.323 [2024-10-01 13:43:57.116930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.323 [2024-10-01 13:43:57.116963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.323 [2024-10-01 13:43:57.128121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.323 [2024-10-01 13:43:57.128324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.323 [2024-10-01 13:43:57.128361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.323 [2024-10-01 13:43:57.128381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.323 [2024-10-01 13:43:57.128417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.323 [2024-10-01 13:43:57.128451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.323 [2024-10-01 13:43:57.128469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.323 [2024-10-01 13:43:57.128485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.323 [2024-10-01 13:43:57.128518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.323 [2024-10-01 13:43:57.138654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.323 [2024-10-01 13:43:57.138814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.323 [2024-10-01 13:43:57.138851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.323 [2024-10-01 13:43:57.138871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.323 [2024-10-01 13:43:57.139822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.323 [2024-10-01 13:43:57.140061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.323 [2024-10-01 13:43:57.140101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.323 [2024-10-01 13:43:57.140121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.323 [2024-10-01 13:43:57.140202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.323 [2024-10-01 13:43:57.150868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.323 [2024-10-01 13:43:57.151185] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.323 [2024-10-01 13:43:57.151252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.323 [2024-10-01 13:43:57.151295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.323 [2024-10-01 13:43:57.152853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.323 [2024-10-01 13:43:57.153231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.323 [2024-10-01 13:43:57.153292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.323 [2024-10-01 13:43:57.153327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.323 [2024-10-01 13:43:57.154525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.323 [2024-10-01 13:43:57.161018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.323 [2024-10-01 13:43:57.161195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.323 [2024-10-01 13:43:57.161260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.323 [2024-10-01 13:43:57.161296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.323 [2024-10-01 13:43:57.162347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.323 [2024-10-01 13:43:57.162700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.323 [2024-10-01 13:43:57.162764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.323 [2024-10-01 13:43:57.162800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.323 [2024-10-01 13:43:57.164046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.323 [2024-10-01 13:43:57.173822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.323 [2024-10-01 13:43:57.175067] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.323 [2024-10-01 13:43:57.175137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.323 [2024-10-01 13:43:57.175175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.323 [2024-10-01 13:43:57.175420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.323 [2024-10-01 13:43:57.175502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.323 [2024-10-01 13:43:57.175560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.323 [2024-10-01 13:43:57.175593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.323 [2024-10-01 13:43:57.175651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.323 [2024-10-01 13:43:57.188638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.324 [2024-10-01 13:43:57.190055] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.324 [2024-10-01 13:43:57.190130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.324 [2024-10-01 13:43:57.190167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.324 [2024-10-01 13:43:57.191354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.324 [2024-10-01 13:43:57.191702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.324 [2024-10-01 13:43:57.191759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.324 [2024-10-01 13:43:57.191790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.324 [2024-10-01 13:43:57.193075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.324 [2024-10-01 13:43:57.200848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.324 [2024-10-01 13:43:57.202499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.324 [2024-10-01 13:43:57.202588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.324 [2024-10-01 13:43:57.202664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.324 [2024-10-01 13:43:57.203804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.324 [2024-10-01 13:43:57.204030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.324 [2024-10-01 13:43:57.204089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.324 [2024-10-01 13:43:57.204122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.324 [2024-10-01 13:43:57.204184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.324 [2024-10-01 13:43:57.214768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.324 [2024-10-01 13:43:57.215167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.324 [2024-10-01 13:43:57.215241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.324 [2024-10-01 13:43:57.215279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.324 [2024-10-01 13:43:57.216856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.324 [2024-10-01 13:43:57.218081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.324 [2024-10-01 13:43:57.218145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.324 [2024-10-01 13:43:57.218180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.324 [2024-10-01 13:43:57.218357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.324 [2024-10-01 13:43:57.228192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.324 [2024-10-01 13:43:57.229502] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.324 [2024-10-01 13:43:57.229589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.324 [2024-10-01 13:43:57.229630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.324 [2024-10-01 13:43:57.231337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.324 [2024-10-01 13:43:57.232426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.324 [2024-10-01 13:43:57.232477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.324 [2024-10-01 13:43:57.232500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.324 [2024-10-01 13:43:57.232644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.324 [2024-10-01 13:43:57.238945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.324 [2024-10-01 13:43:57.239076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.324 [2024-10-01 13:43:57.239111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.324 [2024-10-01 13:43:57.239130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.324 [2024-10-01 13:43:57.239169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.324 [2024-10-01 13:43:57.240115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.324 [2024-10-01 13:43:57.240185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.324 [2024-10-01 13:43:57.240205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.324 [2024-10-01 13:43:57.240440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.324 [2024-10-01 13:43:57.249951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.324 [2024-10-01 13:43:57.250080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.324 [2024-10-01 13:43:57.250114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.324 [2024-10-01 13:43:57.250132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.324 [2024-10-01 13:43:57.250170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.324 [2024-10-01 13:43:57.250208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.324 [2024-10-01 13:43:57.250226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.324 [2024-10-01 13:43:57.250240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.324 [2024-10-01 13:43:57.250277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.324 [2024-10-01 13:43:57.260178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.324 [2024-10-01 13:43:57.260304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.324 [2024-10-01 13:43:57.260337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.324 [2024-10-01 13:43:57.260356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.324 [2024-10-01 13:43:57.260393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.324 [2024-10-01 13:43:57.260430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.324 [2024-10-01 13:43:57.260449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.324 [2024-10-01 13:43:57.260463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.324 [2024-10-01 13:43:57.260499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.324 [2024-10-01 13:43:57.271444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.324 [2024-10-01 13:43:57.271636] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.324 [2024-10-01 13:43:57.271675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.324 [2024-10-01 13:43:57.271695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.324 [2024-10-01 13:43:57.271737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.324 [2024-10-01 13:43:57.271775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.324 [2024-10-01 13:43:57.271792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.324 [2024-10-01 13:43:57.271808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.324 [2024-10-01 13:43:57.271847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.324 [2024-10-01 13:43:57.281997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.324 [2024-10-01 13:43:57.282192] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.324 [2024-10-01 13:43:57.282230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.324 [2024-10-01 13:43:57.282250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.324 [2024-10-01 13:43:57.283204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.324 [2024-10-01 13:43:57.283476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.324 [2024-10-01 13:43:57.283515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.324 [2024-10-01 13:43:57.283549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.324 [2024-10-01 13:43:57.283641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.324 [2024-10-01 13:43:57.293093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.324 [2024-10-01 13:43:57.293228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.324 [2024-10-01 13:43:57.293263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.324 [2024-10-01 13:43:57.293282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.324 [2024-10-01 13:43:57.293321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.324 [2024-10-01 13:43:57.293359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.324 [2024-10-01 13:43:57.293377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.324 [2024-10-01 13:43:57.293392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.324 [2024-10-01 13:43:57.293428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.324 [2024-10-01 13:43:57.303280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.324 [2024-10-01 13:43:57.303410] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.324 [2024-10-01 13:43:57.303444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.324 [2024-10-01 13:43:57.303462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.324 [2024-10-01 13:43:57.303500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.324 [2024-10-01 13:43:57.303553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.324 [2024-10-01 13:43:57.303575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.324 [2024-10-01 13:43:57.303590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.324 [2024-10-01 13:43:57.303627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.324 [2024-10-01 13:43:57.314628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.324 [2024-10-01 13:43:57.314764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.325 [2024-10-01 13:43:57.314797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.325 [2024-10-01 13:43:57.314816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.325 [2024-10-01 13:43:57.314878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.325 [2024-10-01 13:43:57.314917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.325 [2024-10-01 13:43:57.314935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.325 [2024-10-01 13:43:57.314949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.325 [2024-10-01 13:43:57.314985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.325 [2024-10-01 13:43:57.325439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.325 [2024-10-01 13:43:57.325580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.325 [2024-10-01 13:43:57.325622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.325 [2024-10-01 13:43:57.325640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.325 [2024-10-01 13:43:57.325678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.325 [2024-10-01 13:43:57.326614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.325 [2024-10-01 13:43:57.326654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.325 [2024-10-01 13:43:57.326671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.325 [2024-10-01 13:43:57.326912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.325 [2024-10-01 13:43:57.336280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.325 [2024-10-01 13:43:57.336403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.325 [2024-10-01 13:43:57.336436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.325 [2024-10-01 13:43:57.336455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.325 [2024-10-01 13:43:57.336493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.325 [2024-10-01 13:43:57.336530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.325 [2024-10-01 13:43:57.336564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.325 [2024-10-01 13:43:57.336580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.325 [2024-10-01 13:43:57.336618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.325 [2024-10-01 13:43:57.346404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.325 [2024-10-01 13:43:57.346527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.325 [2024-10-01 13:43:57.346576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.325 [2024-10-01 13:43:57.346595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.325 [2024-10-01 13:43:57.346634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.325 [2024-10-01 13:43:57.346670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.325 [2024-10-01 13:43:57.346688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.325 [2024-10-01 13:43:57.346726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.325 [2024-10-01 13:43:57.346766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.325 [2024-10-01 13:43:57.357939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.325 [2024-10-01 13:43:57.358074] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.325 [2024-10-01 13:43:57.358108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.325 [2024-10-01 13:43:57.358127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.325 [2024-10-01 13:43:57.358165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.325 [2024-10-01 13:43:57.358202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.325 [2024-10-01 13:43:57.358220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.325 [2024-10-01 13:43:57.358234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.325 [2024-10-01 13:43:57.358270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.325 [2024-10-01 13:43:57.368174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.325 [2024-10-01 13:43:57.368310] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.325 [2024-10-01 13:43:57.368344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.325 [2024-10-01 13:43:57.368363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.325 [2024-10-01 13:43:57.368418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.325 [2024-10-01 13:43:57.369361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.325 [2024-10-01 13:43:57.369401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.325 [2024-10-01 13:43:57.369420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.325 [2024-10-01 13:43:57.369628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.325 [2024-10-01 13:43:57.379166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.325 [2024-10-01 13:43:57.379317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.325 [2024-10-01 13:43:57.379351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.325 [2024-10-01 13:43:57.379370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.325 [2024-10-01 13:43:57.379408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.325 [2024-10-01 13:43:57.379445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.325 [2024-10-01 13:43:57.379463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.325 [2024-10-01 13:43:57.379477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.325 [2024-10-01 13:43:57.379514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.325 [2024-10-01 13:43:57.389664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.325 [2024-10-01 13:43:57.389894] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.325 [2024-10-01 13:43:57.389931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.325 [2024-10-01 13:43:57.389950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.325 [2024-10-01 13:43:57.389990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.325 [2024-10-01 13:43:57.390028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.325 [2024-10-01 13:43:57.390046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.325 [2024-10-01 13:43:57.390062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.325 [2024-10-01 13:43:57.390100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.325 [2024-10-01 13:43:57.400929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.325 [2024-10-01 13:43:57.401103] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.325 [2024-10-01 13:43:57.401140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.325 [2024-10-01 13:43:57.401159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.325 [2024-10-01 13:43:57.401215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.325 [2024-10-01 13:43:57.401257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.325 [2024-10-01 13:43:57.401275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.325 [2024-10-01 13:43:57.401291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.325 [2024-10-01 13:43:57.401328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.325 [2024-10-01 13:43:57.411095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.325 [2024-10-01 13:43:57.411222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.325 [2024-10-01 13:43:57.411255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.325 [2024-10-01 13:43:57.411274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.325 [2024-10-01 13:43:57.411312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.325 [2024-10-01 13:43:57.412259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.325 [2024-10-01 13:43:57.412300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.325 [2024-10-01 13:43:57.412319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.325 [2024-10-01 13:43:57.412525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.325 [2024-10-01 13:43:57.421924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.325 [2024-10-01 13:43:57.422050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.325 [2024-10-01 13:43:57.422083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.325 [2024-10-01 13:43:57.422101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.325 [2024-10-01 13:43:57.422154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.325 [2024-10-01 13:43:57.422223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.325 [2024-10-01 13:43:57.422244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.325 [2024-10-01 13:43:57.422258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.325 [2024-10-01 13:43:57.422294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.325 [2024-10-01 13:43:57.432203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.325 [2024-10-01 13:43:57.432332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.325 [2024-10-01 13:43:57.432366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.325 [2024-10-01 13:43:57.432384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.325 [2024-10-01 13:43:57.432423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.325 [2024-10-01 13:43:57.432460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.325 [2024-10-01 13:43:57.432477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.325 [2024-10-01 13:43:57.432492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.325 [2024-10-01 13:43:57.432527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.325 [2024-10-01 13:43:57.443364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.325 [2024-10-01 13:43:57.443496] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.325 [2024-10-01 13:43:57.443530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.326 [2024-10-01 13:43:57.443566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.326 [2024-10-01 13:43:57.443606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.326 [2024-10-01 13:43:57.443660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.326 [2024-10-01 13:43:57.443682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.326 [2024-10-01 13:43:57.443697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.326 [2024-10-01 13:43:57.443734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.326 [2024-10-01 13:43:57.453681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.326 [2024-10-01 13:43:57.453806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.326 [2024-10-01 13:43:57.453839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.326 [2024-10-01 13:43:57.453857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.326 [2024-10-01 13:43:57.453895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.326 [2024-10-01 13:43:57.454837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.326 [2024-10-01 13:43:57.454876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.326 [2024-10-01 13:43:57.454895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.326 [2024-10-01 13:43:57.455124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.326 [2024-10-01 13:43:57.464672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.326 [2024-10-01 13:43:57.464795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.326 [2024-10-01 13:43:57.464828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.326 [2024-10-01 13:43:57.464846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.326 [2024-10-01 13:43:57.464884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.326 [2024-10-01 13:43:57.464921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.326 [2024-10-01 13:43:57.464939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.326 [2024-10-01 13:43:57.464953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.326 [2024-10-01 13:43:57.464989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.326 [2024-10-01 13:43:57.475163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.326 [2024-10-01 13:43:57.475289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.326 [2024-10-01 13:43:57.475322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.326 [2024-10-01 13:43:57.475340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.326 [2024-10-01 13:43:57.475378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.326 [2024-10-01 13:43:57.475415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.326 [2024-10-01 13:43:57.475433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.326 [2024-10-01 13:43:57.475448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.326 [2024-10-01 13:43:57.475484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.326 [2024-10-01 13:43:57.486328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.326 [2024-10-01 13:43:57.486625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.326 [2024-10-01 13:43:57.486670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.326 [2024-10-01 13:43:57.486692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.326 [2024-10-01 13:43:57.486756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.326 [2024-10-01 13:43:57.486799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.326 [2024-10-01 13:43:57.486817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.326 [2024-10-01 13:43:57.486832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.326 [2024-10-01 13:43:57.486870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.326 [2024-10-01 13:43:57.496672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.326 [2024-10-01 13:43:57.496800] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.326 [2024-10-01 13:43:57.496834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.326 [2024-10-01 13:43:57.496883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.326 [2024-10-01 13:43:57.497848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.326 [2024-10-01 13:43:57.498073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.326 [2024-10-01 13:43:57.498110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.326 [2024-10-01 13:43:57.498128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.326 [2024-10-01 13:43:57.498213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.326 [2024-10-01 13:43:57.507509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.326 [2024-10-01 13:43:57.507671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.326 [2024-10-01 13:43:57.507713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.326 [2024-10-01 13:43:57.507732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.326 [2024-10-01 13:43:57.507771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.331 [2024-10-01 13:43:57.507808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.331 [2024-10-01 13:43:57.507825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.331 [2024-10-01 13:43:57.507841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.331 [2024-10-01 13:43:57.507893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
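Annotation: the repeated "connect() failed, errno = 111" entries above are Linux ECONNREFUSED; during this phase of the test nothing is accepting connections at 10.0.0.3:4420, so every reconnect attempt in the controller-reset path fails the same way before the qpair can be re-established. A minimal stand-alone sketch, independent of SPDK and of this test, that produces the same errno with a plain TCP connect(); the loopback address and port used here are illustrative only:

/* Editorial sketch (not part of the SPDK test): errno 111 as reported by
 * uring_sock_create above is ECONNREFUSED on Linux, i.e. a TCP connect()
 * to an address/port with no listener. Port 4420 mirrors the NVMe/TCP
 * port in the log; any closed port behaves the same. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),            /* NVMe/TCP default port */
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    /* With no listener on the port, connect() fails immediately and errno
     * is set to ECONNREFUSED (111 on Linux), matching the log lines above. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}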
00:16:17.331 [2024-10-01 13:43:57.514264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.331 [2024-10-01 13:43:57.514312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.331 [2024-10-01 13:43:57.514359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.331 [2024-10-01 13:43:57.514391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.331 [2024-10-01 13:43:57.514421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.331 [2024-10-01 13:43:57.514452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.331 [2024-10-01 13:43:57.514483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.331 [2024-10-01 13:43:57.514555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.331 [2024-10-01 13:43:57.514590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.514622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.514654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514671] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.514685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.514716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.514747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.514780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.514811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.514841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.514872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.514903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.514934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.514977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.514993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.515008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.515025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.515040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.515056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.515071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.515087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.331 [2024-10-01 13:43:57.515101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.331 [2024-10-01 13:43:57.515118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:62 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62424 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.332 [2024-10-01 13:43:57.515897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.332 [2024-10-01 13:43:57.515930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.332 [2024-10-01 13:43:57.515961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.515977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.332 [2024-10-01 
13:43:57.515992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.516008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.332 [2024-10-01 13:43:57.516023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.516039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.332 [2024-10-01 13:43:57.516054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.332 [2024-10-01 13:43:57.516071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.333 [2024-10-01 13:43:57.516087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.333 [2024-10-01 13:43:57.516118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.333 [2024-10-01 13:43:57.516149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.333 [2024-10-01 13:43:57.516189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.333 [2024-10-01 13:43:57.516221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.333 [2024-10-01 13:43:57.516252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.333 [2024-10-01 13:43:57.516284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.333 [2024-10-01 13:43:57.516315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.333 [2024-10-01 13:43:57.516346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.333 [2024-10-01 13:43:57.516377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.333 [2024-10-01 13:43:57.516418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.333 [2024-10-01 13:43:57.516449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.333 [2024-10-01 13:43:57.516480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.333 [2024-10-01 13:43:57.516511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.333 [2024-10-01 13:43:57.516527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.516555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.516589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.516630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.516663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.516694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.334 [2024-10-01 13:43:57.516725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.334 [2024-10-01 13:43:57.516756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.334 [2024-10-01 13:43:57.516787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.334 [2024-10-01 13:43:57.516818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.334 [2024-10-01 13:43:57.516849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.334 [2024-10-01 13:43:57.516879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.334 [2024-10-01 13:43:57.516909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.334 [2024-10-01 13:43:57.516940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.516971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.516987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.517002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.517026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.517041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.517058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.517072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.517088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.517103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.517120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.517134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.517151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.517165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.517181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.517195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.517212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.517226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.517242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.517256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.517272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.517287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 
13:43:57.517303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.517317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.517334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.334 [2024-10-01 13:43:57.517348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.334 [2024-10-01 13:43:57.517364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.335 [2024-10-01 13:43:57.517379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.335 [2024-10-01 13:43:57.517417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.335 [2024-10-01 13:43:57.517450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.335 [2024-10-01 13:43:57.517481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.335 [2024-10-01 13:43:57.517511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.335 [2024-10-01 13:43:57.517568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.335 [2024-10-01 13:43:57.517617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.517648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.517680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.517710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.517741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.517771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.517801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.517832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.517880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.517912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.517943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.517975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.517991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:103 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.518006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.518023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.518037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.518053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.518068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.518084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.335 [2024-10-01 13:43:57.518099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.518114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb42020 is same with the state(6) to be set 00:16:17.335 [2024-10-01 13:43:57.518131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.335 [2024-10-01 13:43:57.518143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.335 [2024-10-01 13:43:57.518155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62224 len:8 PRP1 0x0 PRP2 0x0 00:16:17.335 [2024-10-01 13:43:57.518169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.335 [2024-10-01 13:43:57.518185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.335 [2024-10-01 13:43:57.518195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.336 [2024-10-01 13:43:57.518206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62776 len:8 PRP1 0x0 PRP2 0x0 00:16:17.336 [2024-10-01 13:43:57.518220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.336 [2024-10-01 13:43:57.518235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.336 [2024-10-01 13:43:57.518245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.336 [2024-10-01 13:43:57.518257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62784 len:8 PRP1 0x0 PRP2 0x0 00:16:17.336 [2024-10-01 13:43:57.518278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.336 [2024-10-01 13:43:57.518294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.336 [2024-10-01 13:43:57.518304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.336 [2024-10-01 13:43:57.518315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62792 len:8 PRP1 0x0 PRP2 0x0 00:16:17.336 [2024-10-01 13:43:57.518329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.336 [2024-10-01 13:43:57.518346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.336 [2024-10-01 13:43:57.518357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.336 [2024-10-01 13:43:57.518368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62800 len:8 PRP1 0x0 PRP2 0x0 00:16:17.336 [2024-10-01 13:43:57.518381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.336 [2024-10-01 13:43:57.518396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.336 [2024-10-01 13:43:57.518406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.336 [2024-10-01 13:43:57.518417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62808 len:8 PRP1 0x0 PRP2 0x0 00:16:17.336 [2024-10-01 13:43:57.518430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.336 [2024-10-01 13:43:57.518445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.336 [2024-10-01 13:43:57.518455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.336 [2024-10-01 13:43:57.518466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62816 len:8 PRP1 0x0 PRP2 0x0 00:16:17.336 [2024-10-01 13:43:57.518479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.336 [2024-10-01 13:43:57.518494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.336 [2024-10-01 13:43:57.518504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.336 [2024-10-01 13:43:57.518515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62824 len:8 PRP1 0x0 PRP2 0x0 00:16:17.336 [2024-10-01 13:43:57.518528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.336 [2024-10-01 13:43:57.518561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.336 [2024-10-01 13:43:57.518573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.336 [2024-10-01 13:43:57.518584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62832 len:8 PRP1 0x0 PRP2 0x0 00:16:17.336 [2024-10-01 13:43:57.518598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.336 [2024-10-01 13:43:57.518613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.336 [2024-10-01 13:43:57.518623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.336 [2024-10-01 13:43:57.518634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:62840 len:8 PRP1 0x0 PRP2 0x0 00:16:17.336 [2024-10-01 13:43:57.518647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.336 [2024-10-01 13:43:57.518662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.336 [2024-10-01 13:43:57.518680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.336 [2024-10-01 13:43:57.518692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62848 len:8 PRP1 0x0 PRP2 0x0 00:16:17.336 [2024-10-01 13:43:57.518706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.336 [2024-10-01 13:43:57.518721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.336 [2024-10-01 13:43:57.518732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.336 [2024-10-01 13:43:57.518742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62856 len:8 PRP1 0x0 PRP2 0x0 00:16:17.336 [2024-10-01 13:43:57.518756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.336 [2024-10-01 13:43:57.518773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:17.336 [2024-10-01 13:43:57.518785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:17.336 [2024-10-01 13:43:57.518795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62864 len:8 PRP1 0x0 PRP2 0x0 00:16:17.336 [2024-10-01 13:43:57.518809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.336 [2024-10-01 13:43:57.518857] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb42020 was disconnected and freed. reset controller. 
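Annotation: the "(00/08)" pair printed with "ABORTED - SQ DELETION" in the completions above is the NVMe Status Code Type / Status Code; SCT 0x0 with SC 0x08 is the generic "Command Aborted due to SQ Deletion" status that queued I/O receives when the qpair is torn down, which is why every outstanding read and write in this dump completes that way before the qpair is freed. A small sketch, independent of SPDK, showing where those fields sit in completion dword 3 of an NVMe completion queue entry:

/* Editorial sketch: decode the SCT/SC pair behind the "(00/08)" notation in
 * the log. Bit positions follow the NVMe completion queue entry layout
 * (status field occupies bits 31:17 of dword 3). */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Example dword 3 status bits with SCT=0x0, SC=0x08 and DNR=0, as in the
     * "ABORTED - SQ DELETION (00/08) ... dnr:0" completions printed above. */
    uint32_t cdw3 = 0x08u << 17;

    uint8_t sc  = (cdw3 >> 17) & 0xff;  /* Status Code      */
    uint8_t sct = (cdw3 >> 25) & 0x07;  /* Status Code Type */
    uint8_t dnr = (cdw3 >> 31) & 0x01;  /* Do Not Retry     */

    printf("(%02x/%02x) dnr:%u -> %s\n", sct, sc, dnr,
           (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other");
    return 0;
}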
00:16:17.336 [2024-10-01 13:43:57.519986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.336 [2024-10-01 13:43:57.520060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.336 [2024-10-01 13:43:57.520083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.336 [2024-10-01 13:43:57.520107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.336 [2024-10-01 13:43:57.520303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.336 [2024-10-01 13:43:57.520531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.336 [2024-10-01 13:43:57.520578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.336 [2024-10-01 13:43:57.520597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.336 [2024-10-01 13:43:57.520650] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.336 [2024-10-01 13:43:57.520675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.336 [2024-10-01 13:43:57.520691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.336 [2024-10-01 13:43:57.520762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.337 [2024-10-01 13:43:57.520791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.337 [2024-10-01 13:43:57.520833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.337 [2024-10-01 13:43:57.520853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.337 [2024-10-01 13:43:57.520869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.337 [2024-10-01 13:43:57.520886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.337 [2024-10-01 13:43:57.520902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.337 [2024-10-01 13:43:57.520930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.337 [2024-10-01 13:43:57.520966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.337 [2024-10-01 13:43:57.520987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.337 [2024-10-01 13:43:57.530400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.337 [2024-10-01 13:43:57.530466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.337 [2024-10-01 13:43:57.530568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.337 [2024-10-01 13:43:57.530600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.337 [2024-10-01 13:43:57.530618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.337 [2024-10-01 13:43:57.530687] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.337 [2024-10-01 13:43:57.530716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.337 [2024-10-01 13:43:57.530733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.337 [2024-10-01 13:43:57.530753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.337 [2024-10-01 13:43:57.530787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.337 [2024-10-01 13:43:57.530808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.337 [2024-10-01 13:43:57.530823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.337 [2024-10-01 13:43:57.530837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.337 [2024-10-01 13:43:57.532099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.337 [2024-10-01 13:43:57.532130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.337 [2024-10-01 13:43:57.532153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.337 [2024-10-01 13:43:57.532167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.337 [2024-10-01 13:43:57.532396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.337 [2024-10-01 13:43:57.540488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.337 [2024-10-01 13:43:57.540639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.337 [2024-10-01 13:43:57.540687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.337 [2024-10-01 13:43:57.540709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.337 [2024-10-01 13:43:57.541575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.337 [2024-10-01 13:43:57.541820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.337 [2024-10-01 13:43:57.541874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.337 [2024-10-01 13:43:57.541896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.337 [2024-10-01 13:43:57.541910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.337 [2024-10-01 13:43:57.542949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.337 [2024-10-01 13:43:57.543041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.337 [2024-10-01 13:43:57.543072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.337 [2024-10-01 13:43:57.543091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.337 [2024-10-01 13:43:57.543732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.337 [2024-10-01 13:43:57.543845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.337 [2024-10-01 13:43:57.543891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.337 [2024-10-01 13:43:57.543912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.337 [2024-10-01 13:43:57.543949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.337 [2024-10-01 13:43:57.550968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.337 [2024-10-01 13:43:57.551094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.337 [2024-10-01 13:43:57.551128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.337 [2024-10-01 13:43:57.551147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.337 [2024-10-01 13:43:57.551181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.337 [2024-10-01 13:43:57.551213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.337 [2024-10-01 13:43:57.551231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.337 [2024-10-01 13:43:57.551245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.337 [2024-10-01 13:43:57.551277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.337 [2024-10-01 13:43:57.554022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.337 [2024-10-01 13:43:57.554145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.337 [2024-10-01 13:43:57.554179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.337 [2024-10-01 13:43:57.554198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.337 [2024-10-01 13:43:57.554231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.337 [2024-10-01 13:43:57.554264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.337 [2024-10-01 13:43:57.554282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.337 [2024-10-01 13:43:57.554296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.337 [2024-10-01 13:43:57.554329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.337 [2024-10-01 13:43:57.561428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.337 [2024-10-01 13:43:57.561576] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.337 [2024-10-01 13:43:57.561614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.338 [2024-10-01 13:43:57.561633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.338 [2024-10-01 13:43:57.562612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.338 [2024-10-01 13:43:57.562855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.338 [2024-10-01 13:43:57.562894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.338 [2024-10-01 13:43:57.562912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.338 [2024-10-01 13:43:57.562993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.338 [2024-10-01 13:43:57.565432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.338 [2024-10-01 13:43:57.565587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.338 [2024-10-01 13:43:57.565625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.338 [2024-10-01 13:43:57.565644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.338 [2024-10-01 13:43:57.565698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.338 [2024-10-01 13:43:57.565737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.338 [2024-10-01 13:43:57.565756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.338 [2024-10-01 13:43:57.565770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.338 [2024-10-01 13:43:57.565803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.338 [2024-10-01 13:43:57.572363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.338 [2024-10-01 13:43:57.572487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.338 [2024-10-01 13:43:57.572521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.338 [2024-10-01 13:43:57.572555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.338 [2024-10-01 13:43:57.572592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.338 [2024-10-01 13:43:57.572624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.338 [2024-10-01 13:43:57.572642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.338 [2024-10-01 13:43:57.572657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.338 [2024-10-01 13:43:57.572701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.338 [2024-10-01 13:43:57.575662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.338 [2024-10-01 13:43:57.575781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.338 [2024-10-01 13:43:57.575814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.338 [2024-10-01 13:43:57.575832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.338 [2024-10-01 13:43:57.575866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.338 [2024-10-01 13:43:57.576810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.338 [2024-10-01 13:43:57.576851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.338 [2024-10-01 13:43:57.576891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.338 [2024-10-01 13:43:57.577097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.338 [2024-10-01 13:43:57.582588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.338 [2024-10-01 13:43:57.582708] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.338 [2024-10-01 13:43:57.582742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.338 [2024-10-01 13:43:57.582760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.338 [2024-10-01 13:43:57.582806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.338 [2024-10-01 13:43:57.582841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.338 [2024-10-01 13:43:57.582858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.338 [2024-10-01 13:43:57.582872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.338 [2024-10-01 13:43:57.582904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.338 [2024-10-01 13:43:57.586475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.338 [2024-10-01 13:43:57.586614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.338 [2024-10-01 13:43:57.586655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.338 [2024-10-01 13:43:57.586673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.338 [2024-10-01 13:43:57.586723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.338 [2024-10-01 13:43:57.586760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.338 [2024-10-01 13:43:57.586779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.338 [2024-10-01 13:43:57.586794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.338 [2024-10-01 13:43:57.586826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.338 [2024-10-01 13:43:57.593712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.338 [2024-10-01 13:43:57.593836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.338 [2024-10-01 13:43:57.593870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.338 [2024-10-01 13:43:57.593888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.338 [2024-10-01 13:43:57.593922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.338 [2024-10-01 13:43:57.593955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.338 [2024-10-01 13:43:57.593972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.338 [2024-10-01 13:43:57.593987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.338 [2024-10-01 13:43:57.594019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.338 [2024-10-01 13:43:57.596690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.338 [2024-10-01 13:43:57.596809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.338 [2024-10-01 13:43:57.596857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.338 [2024-10-01 13:43:57.596878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.338 [2024-10-01 13:43:57.596913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.338 [2024-10-01 13:43:57.596945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.338 [2024-10-01 13:43:57.596963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.338 [2024-10-01 13:43:57.596977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.338 [2024-10-01 13:43:57.597009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.338 [2024-10-01 13:43:57.604124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.338 [2024-10-01 13:43:57.604244] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.339 [2024-10-01 13:43:57.604278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.339 [2024-10-01 13:43:57.604296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.339 [2024-10-01 13:43:57.604330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.339 [2024-10-01 13:43:57.604369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.339 [2024-10-01 13:43:57.604387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.339 [2024-10-01 13:43:57.604401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.339 [2024-10-01 13:43:57.605329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.339 [2024-10-01 13:43:57.608113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.339 [2024-10-01 13:43:57.608261] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.339 [2024-10-01 13:43:57.608294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.339 [2024-10-01 13:43:57.608313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.339 [2024-10-01 13:43:57.608348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.339 [2024-10-01 13:43:57.608381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.339 [2024-10-01 13:43:57.608398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.339 [2024-10-01 13:43:57.608413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.339 [2024-10-01 13:43:57.608445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.339 [2024-10-01 13:43:57.615115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.339 [2024-10-01 13:43:57.615238] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.339 [2024-10-01 13:43:57.615272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.339 [2024-10-01 13:43:57.615291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.339 [2024-10-01 13:43:57.615325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.339 [2024-10-01 13:43:57.615378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.339 [2024-10-01 13:43:57.615398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.339 [2024-10-01 13:43:57.615412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.339 [2024-10-01 13:43:57.615445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.339 [2024-10-01 13:43:57.618405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.339 [2024-10-01 13:43:57.618528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.339 [2024-10-01 13:43:57.618603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.339 [2024-10-01 13:43:57.618624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.339 [2024-10-01 13:43:57.618659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.339 [2024-10-01 13:43:57.618692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.339 [2024-10-01 13:43:57.618709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.339 [2024-10-01 13:43:57.618724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.339 [2024-10-01 13:43:57.619653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.339 [2024-10-01 13:43:57.625386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.339 [2024-10-01 13:43:57.625509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.339 [2024-10-01 13:43:57.625563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.339 [2024-10-01 13:43:57.625598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.339 [2024-10-01 13:43:57.625639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.339 [2024-10-01 13:43:57.625672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.339 [2024-10-01 13:43:57.625690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.339 [2024-10-01 13:43:57.625704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.339 [2024-10-01 13:43:57.625736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.339 [2024-10-01 13:43:57.629449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.339 [2024-10-01 13:43:57.629611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.339 [2024-10-01 13:43:57.629647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.339 [2024-10-01 13:43:57.629667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.339 [2024-10-01 13:43:57.629705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.339 [2024-10-01 13:43:57.629739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.339 [2024-10-01 13:43:57.629758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.339 [2024-10-01 13:43:57.629772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.339 [2024-10-01 13:43:57.629827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.339 [2024-10-01 13:43:57.636757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.339 [2024-10-01 13:43:57.636919] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.339 [2024-10-01 13:43:57.636953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.339 [2024-10-01 13:43:57.636972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.339 [2024-10-01 13:43:57.637007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.339 [2024-10-01 13:43:57.637040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.339 [2024-10-01 13:43:57.637058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.339 [2024-10-01 13:43:57.637073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.339 [2024-10-01 13:43:57.637105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.339 [2024-10-01 13:43:57.639755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.339 [2024-10-01 13:43:57.639888] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.339 [2024-10-01 13:43:57.639922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.339 [2024-10-01 13:43:57.639940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.339 [2024-10-01 13:43:57.639975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.340 [2024-10-01 13:43:57.640008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.340 [2024-10-01 13:43:57.640026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.340 [2024-10-01 13:43:57.640041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.340 [2024-10-01 13:43:57.640072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.340 [2024-10-01 13:43:57.647079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.340 [2024-10-01 13:43:57.647202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.340 [2024-10-01 13:43:57.647235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.340 [2024-10-01 13:43:57.647254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.340 [2024-10-01 13:43:57.647288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.340 [2024-10-01 13:43:57.648237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.340 [2024-10-01 13:43:57.648278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.340 [2024-10-01 13:43:57.648297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.340 [2024-10-01 13:43:57.648499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.340 [2024-10-01 13:43:57.651074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.340 [2024-10-01 13:43:57.651195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.340 [2024-10-01 13:43:57.651234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.340 [2024-10-01 13:43:57.651281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.340 [2024-10-01 13:43:57.651318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.340 [2024-10-01 13:43:57.651351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.340 [2024-10-01 13:43:57.651369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.340 [2024-10-01 13:43:57.651383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.340 [2024-10-01 13:43:57.651416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.340 [2024-10-01 13:43:57.658084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.340 [2024-10-01 13:43:57.658209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.340 [2024-10-01 13:43:57.658250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.340 [2024-10-01 13:43:57.658269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.340 [2024-10-01 13:43:57.658304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.340 [2024-10-01 13:43:57.658336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.340 [2024-10-01 13:43:57.658354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.340 [2024-10-01 13:43:57.658368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.340 [2024-10-01 13:43:57.658400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.340 [2024-10-01 13:43:57.661405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.340 [2024-10-01 13:43:57.661521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.340 [2024-10-01 13:43:57.661580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.340 [2024-10-01 13:43:57.661608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.340 [2024-10-01 13:43:57.662526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.340 [2024-10-01 13:43:57.662763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.340 [2024-10-01 13:43:57.662793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.340 [2024-10-01 13:43:57.662809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.340 [2024-10-01 13:43:57.662888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.340 8349.60 IOPS, 32.62 MiB/s [2024-10-01 13:43:57.669019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.340 [2024-10-01 13:43:57.669142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.340 [2024-10-01 13:43:57.669175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.341 [2024-10-01 13:43:57.669193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.341 [2024-10-01 13:43:57.669227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.341 [2024-10-01 13:43:57.669260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.341 [2024-10-01 13:43:57.669294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.341 [2024-10-01 13:43:57.669310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.341 [2024-10-01 13:43:57.669343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.341 [2024-10-01 13:43:57.672328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.341 [2024-10-01 13:43:57.672457] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.341 [2024-10-01 13:43:57.672490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.341 [2024-10-01 13:43:57.672508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.341 [2024-10-01 13:43:57.672555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.341 [2024-10-01 13:43:57.672601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.341 [2024-10-01 13:43:57.672631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.341 [2024-10-01 13:43:57.672647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.341 [2024-10-01 13:43:57.672681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.341 [2024-10-01 13:43:57.679703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.341 [2024-10-01 13:43:57.679823] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.341 [2024-10-01 13:43:57.679856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.341 [2024-10-01 13:43:57.679888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.341 [2024-10-01 13:43:57.679926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.341 [2024-10-01 13:43:57.679959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.341 [2024-10-01 13:43:57.679976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.341 [2024-10-01 13:43:57.679991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.341 [2024-10-01 13:43:57.680023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.341 [2024-10-01 13:43:57.682678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.341 [2024-10-01 13:43:57.682799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.341 [2024-10-01 13:43:57.682832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.341 [2024-10-01 13:43:57.682850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.341 [2024-10-01 13:43:57.682884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.341 [2024-10-01 13:43:57.682917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.341 [2024-10-01 13:43:57.682935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.341 [2024-10-01 13:43:57.682950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.341 [2024-10-01 13:43:57.682981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.341 [2024-10-01 13:43:57.689990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.341 [2024-10-01 13:43:57.690112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.341 [2024-10-01 13:43:57.690145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.341 [2024-10-01 13:43:57.690163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.341 [2024-10-01 13:43:57.690197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.341 [2024-10-01 13:43:57.691128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.341 [2024-10-01 13:43:57.691168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.341 [2024-10-01 13:43:57.691187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.341 [2024-10-01 13:43:57.691400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.341 [2024-10-01 13:43:57.694038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.341 [2024-10-01 13:43:57.694169] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.341 [2024-10-01 13:43:57.694203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.341 [2024-10-01 13:43:57.694222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.341 [2024-10-01 13:43:57.694256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.341 [2024-10-01 13:43:57.694289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.341 [2024-10-01 13:43:57.694307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.341 [2024-10-01 13:43:57.694321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.341 [2024-10-01 13:43:57.694353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.341 [2024-10-01 13:43:57.701145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.341 [2024-10-01 13:43:57.701267] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.341 [2024-10-01 13:43:57.701301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.341 [2024-10-01 13:43:57.701319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.341 [2024-10-01 13:43:57.701353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.341 [2024-10-01 13:43:57.701386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.341 [2024-10-01 13:43:57.701403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.341 [2024-10-01 13:43:57.701417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.341 [2024-10-01 13:43:57.701450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.341 [2024-10-01 13:43:57.704414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.341 [2024-10-01 13:43:57.704531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.342 [2024-10-01 13:43:57.704591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.342 [2024-10-01 13:43:57.704623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.342 [2024-10-01 13:43:57.704693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.342 [2024-10-01 13:43:57.705633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.342 [2024-10-01 13:43:57.705672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.342 [2024-10-01 13:43:57.705690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.342 [2024-10-01 13:43:57.705895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.342 [2024-10-01 13:43:57.711487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.342 [2024-10-01 13:43:57.711623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.342 [2024-10-01 13:43:57.711656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.342 [2024-10-01 13:43:57.711675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.342 [2024-10-01 13:43:57.711709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.342 [2024-10-01 13:43:57.711741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.342 [2024-10-01 13:43:57.711759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.342 [2024-10-01 13:43:57.711773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.342 [2024-10-01 13:43:57.711806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.342 [2024-10-01 13:43:57.715482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.342 [2024-10-01 13:43:57.715617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.342 [2024-10-01 13:43:57.715650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.342 [2024-10-01 13:43:57.715669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.342 [2024-10-01 13:43:57.715703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.342 [2024-10-01 13:43:57.715736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.342 [2024-10-01 13:43:57.715759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.342 [2024-10-01 13:43:57.715773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.342 [2024-10-01 13:43:57.715806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.342 [2024-10-01 13:43:57.721983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.342 [2024-10-01 13:43:57.722844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.342 [2024-10-01 13:43:57.722891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.342 [2024-10-01 13:43:57.722913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.342 [2024-10-01 13:43:57.723091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.342 [2024-10-01 13:43:57.723140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.342 [2024-10-01 13:43:57.723160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.342 [2024-10-01 13:43:57.723191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.342 [2024-10-01 13:43:57.723228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.342 [2024-10-01 13:43:57.725972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.342 [2024-10-01 13:43:57.726092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.342 [2024-10-01 13:43:57.726125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.342 [2024-10-01 13:43:57.726144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.342 [2024-10-01 13:43:57.726177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.342 [2024-10-01 13:43:57.726210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.342 [2024-10-01 13:43:57.726228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.342 [2024-10-01 13:43:57.726242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.342 [2024-10-01 13:43:57.726274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.342 [2024-10-01 13:43:57.733435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.342 [2024-10-01 13:43:57.733594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.342 [2024-10-01 13:43:57.733632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.342 [2024-10-01 13:43:57.733652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.342 [2024-10-01 13:43:57.734604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.342 [2024-10-01 13:43:57.734848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.342 [2024-10-01 13:43:57.734886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.342 [2024-10-01 13:43:57.734904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.342 [2024-10-01 13:43:57.734987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.342 [2024-10-01 13:43:57.737382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.342 [2024-10-01 13:43:57.737527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.342 [2024-10-01 13:43:57.737591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.342 [2024-10-01 13:43:57.737624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.342 [2024-10-01 13:43:57.737710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.342 [2024-10-01 13:43:57.737768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.343 [2024-10-01 13:43:57.737799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.343 [2024-10-01 13:43:57.737823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.343 [2024-10-01 13:43:57.737869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.343 [2024-10-01 13:43:57.744630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.343 [2024-10-01 13:43:57.744791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.343 [2024-10-01 13:43:57.744828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.343 [2024-10-01 13:43:57.744848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.343 [2024-10-01 13:43:57.744883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.343 [2024-10-01 13:43:57.744916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.343 [2024-10-01 13:43:57.744933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.343 [2024-10-01 13:43:57.744948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.343 [2024-10-01 13:43:57.744981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.343 [2024-10-01 13:43:57.747959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.343 [2024-10-01 13:43:57.748081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.343 [2024-10-01 13:43:57.748115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.343 [2024-10-01 13:43:57.748133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.343 [2024-10-01 13:43:57.749080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.343 [2024-10-01 13:43:57.749312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.343 [2024-10-01 13:43:57.749357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.343 [2024-10-01 13:43:57.749376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.343 [2024-10-01 13:43:57.749457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.343 [2024-10-01 13:43:57.754894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.343 [2024-10-01 13:43:57.755017] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.343 [2024-10-01 13:43:57.755050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.343 [2024-10-01 13:43:57.755068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.343 [2024-10-01 13:43:57.755102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.343 [2024-10-01 13:43:57.755135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.343 [2024-10-01 13:43:57.755152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.343 [2024-10-01 13:43:57.755169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.343 [2024-10-01 13:43:57.755201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.343 [2024-10-01 13:43:57.758949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.343 [2024-10-01 13:43:57.759072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.343 [2024-10-01 13:43:57.759106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.343 [2024-10-01 13:43:57.759124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.343 [2024-10-01 13:43:57.759158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.343 [2024-10-01 13:43:57.759209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.343 [2024-10-01 13:43:57.759228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.343 [2024-10-01 13:43:57.759243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.343 [2024-10-01 13:43:57.759276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.343 [2024-10-01 13:43:57.766171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.343 [2024-10-01 13:43:57.766309] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.343 [2024-10-01 13:43:57.766343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.343 [2024-10-01 13:43:57.766362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.343 [2024-10-01 13:43:57.766413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.343 [2024-10-01 13:43:57.766451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.343 [2024-10-01 13:43:57.766470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.343 [2024-10-01 13:43:57.766484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.343 [2024-10-01 13:43:57.766517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.343 [2024-10-01 13:43:57.769197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.343 [2024-10-01 13:43:57.769326] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.343 [2024-10-01 13:43:57.769359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.343 [2024-10-01 13:43:57.769378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.343 [2024-10-01 13:43:57.769412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.343 [2024-10-01 13:43:57.769445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.343 [2024-10-01 13:43:57.769463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.343 [2024-10-01 13:43:57.769478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.343 [2024-10-01 13:43:57.769510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.343 [2024-10-01 13:43:57.776595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.343 [2024-10-01 13:43:57.776723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.344 [2024-10-01 13:43:57.776757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.344 [2024-10-01 13:43:57.776776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.344 [2024-10-01 13:43:57.776810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.344 [2024-10-01 13:43:57.777758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.344 [2024-10-01 13:43:57.777800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.344 [2024-10-01 13:43:57.777818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.344 [2024-10-01 13:43:57.778060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.344 [2024-10-01 13:43:57.780572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.344 [2024-10-01 13:43:57.780701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.344 [2024-10-01 13:43:57.780735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.344 [2024-10-01 13:43:57.780754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.344 [2024-10-01 13:43:57.780788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.344 [2024-10-01 13:43:57.780820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.344 [2024-10-01 13:43:57.780838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.344 [2024-10-01 13:43:57.780853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.344 [2024-10-01 13:43:57.780885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.344 [2024-10-01 13:43:57.787495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.344 [2024-10-01 13:43:57.787630] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.344 [2024-10-01 13:43:57.787665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.344 [2024-10-01 13:43:57.787684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.344 [2024-10-01 13:43:57.787735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.344 [2024-10-01 13:43:57.787774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.344 [2024-10-01 13:43:57.787793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.344 [2024-10-01 13:43:57.787807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.344 [2024-10-01 13:43:57.787839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.344 [2024-10-01 13:43:57.790825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.344 [2024-10-01 13:43:57.790946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.344 [2024-10-01 13:43:57.790978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.344 [2024-10-01 13:43:57.790997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.344 [2024-10-01 13:43:57.791031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.344 [2024-10-01 13:43:57.791976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.344 [2024-10-01 13:43:57.792016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.344 [2024-10-01 13:43:57.792035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.344 [2024-10-01 13:43:57.792259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.344 [2024-10-01 13:43:57.797777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.344 [2024-10-01 13:43:57.797903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.344 [2024-10-01 13:43:57.797936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.344 [2024-10-01 13:43:57.797977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.344 [2024-10-01 13:43:57.798013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.344 [2024-10-01 13:43:57.798046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.344 [2024-10-01 13:43:57.798064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.344 [2024-10-01 13:43:57.798079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.344 [2024-10-01 13:43:57.798111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.344 [2024-10-01 13:43:57.801782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.344 [2024-10-01 13:43:57.801904] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.344 [2024-10-01 13:43:57.801937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.344 [2024-10-01 13:43:57.801955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.344 [2024-10-01 13:43:57.801990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.344 [2024-10-01 13:43:57.802022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.344 [2024-10-01 13:43:57.802041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.344 [2024-10-01 13:43:57.802055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.344 [2024-10-01 13:43:57.802087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.344 [2024-10-01 13:43:57.809648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.344 [2024-10-01 13:43:57.809947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.344 [2024-10-01 13:43:57.809993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.344 [2024-10-01 13:43:57.810014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.344 [2024-10-01 13:43:57.810058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.344 [2024-10-01 13:43:57.810093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.344 [2024-10-01 13:43:57.810111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.345 [2024-10-01 13:43:57.810126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.345 [2024-10-01 13:43:57.810159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.345 [2024-10-01 13:43:57.812882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.345 [2024-10-01 13:43:57.813002] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.345 [2024-10-01 13:43:57.813044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.345 [2024-10-01 13:43:57.813065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.345 [2024-10-01 13:43:57.813099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.345 [2024-10-01 13:43:57.813132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.345 [2024-10-01 13:43:57.813169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.345 [2024-10-01 13:43:57.813185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.345 [2024-10-01 13:43:57.813219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.345 [2024-10-01 13:43:57.820885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.345 [2024-10-01 13:43:57.821110] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.345 [2024-10-01 13:43:57.821150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.345 [2024-10-01 13:43:57.821169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.345 [2024-10-01 13:43:57.822148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.345 [2024-10-01 13:43:57.822426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.345 [2024-10-01 13:43:57.822466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.345 [2024-10-01 13:43:57.822487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.345 [2024-10-01 13:43:57.822590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.345 [2024-10-01 13:43:57.824983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.345 [2024-10-01 13:43:57.825152] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.345 [2024-10-01 13:43:57.825188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.345 [2024-10-01 13:43:57.825219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.345 [2024-10-01 13:43:57.825256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.345 [2024-10-01 13:43:57.825305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.345 [2024-10-01 13:43:57.825325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.345 [2024-10-01 13:43:57.825341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.345 [2024-10-01 13:43:57.825375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.345 [2024-10-01 13:43:57.832249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.345 [2024-10-01 13:43:57.832424] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.345 [2024-10-01 13:43:57.832461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.345 [2024-10-01 13:43:57.832481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.345 [2024-10-01 13:43:57.832517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.345 [2024-10-01 13:43:57.832567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.345 [2024-10-01 13:43:57.832587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.345 [2024-10-01 13:43:57.832603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.345 [2024-10-01 13:43:57.832637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.345 [2024-10-01 13:43:57.835530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.345 [2024-10-01 13:43:57.835683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.345 [2024-10-01 13:43:57.835717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.345 [2024-10-01 13:43:57.835735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.345 [2024-10-01 13:43:57.835769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.345 [2024-10-01 13:43:57.836716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.345 [2024-10-01 13:43:57.836757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.345 [2024-10-01 13:43:57.836775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.345 [2024-10-01 13:43:57.836993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.345 [2024-10-01 13:43:57.842515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.345 [2024-10-01 13:43:57.842657] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.345 [2024-10-01 13:43:57.842701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.345 [2024-10-01 13:43:57.842723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.345 [2024-10-01 13:43:57.842757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.345 [2024-10-01 13:43:57.842790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.345 [2024-10-01 13:43:57.842808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.345 [2024-10-01 13:43:57.842822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.345 [2024-10-01 13:43:57.842855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.345 [2024-10-01 13:43:57.846589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.345 [2024-10-01 13:43:57.846710] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.345 [2024-10-01 13:43:57.846743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.345 [2024-10-01 13:43:57.846761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.346 [2024-10-01 13:43:57.846794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.346 [2024-10-01 13:43:57.846837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.346 [2024-10-01 13:43:57.846855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.346 [2024-10-01 13:43:57.846870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.346 [2024-10-01 13:43:57.846902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.346 [2024-10-01 13:43:57.853025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.346 [2024-10-01 13:43:57.853896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.346 [2024-10-01 13:43:57.853943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.346 [2024-10-01 13:43:57.853965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.346 [2024-10-01 13:43:57.854173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.346 [2024-10-01 13:43:57.854234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.346 [2024-10-01 13:43:57.854256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.346 [2024-10-01 13:43:57.854271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.346 [2024-10-01 13:43:57.854306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.346 [2024-10-01 13:43:57.856942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.346 [2024-10-01 13:43:57.857062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.346 [2024-10-01 13:43:57.857095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.346 [2024-10-01 13:43:57.857113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.346 [2024-10-01 13:43:57.857147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.346 [2024-10-01 13:43:57.857180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.346 [2024-10-01 13:43:57.857197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.346 [2024-10-01 13:43:57.857212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.346 [2024-10-01 13:43:57.857243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.346 [2024-10-01 13:43:57.864233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.346 [2024-10-01 13:43:57.864364] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.346 [2024-10-01 13:43:57.864397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.346 [2024-10-01 13:43:57.864415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.346 [2024-10-01 13:43:57.864465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.346 [2024-10-01 13:43:57.864502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.346 [2024-10-01 13:43:57.864520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.346 [2024-10-01 13:43:57.864549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.346 [2024-10-01 13:43:57.865467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.346 [2024-10-01 13:43:57.868119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.346 [2024-10-01 13:43:57.868382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.346 [2024-10-01 13:43:57.868427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.346 [2024-10-01 13:43:57.868448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.346 [2024-10-01 13:43:57.868508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.346 [2024-10-01 13:43:57.868562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.346 [2024-10-01 13:43:57.868584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.346 [2024-10-01 13:43:57.868621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.346 [2024-10-01 13:43:57.868657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.346 [2024-10-01 13:43:57.875278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.346 [2024-10-01 13:43:57.875402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.346 [2024-10-01 13:43:57.875436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.346 [2024-10-01 13:43:57.875455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.346 [2024-10-01 13:43:57.875489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.346 [2024-10-01 13:43:57.875521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.346 [2024-10-01 13:43:57.875554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.346 [2024-10-01 13:43:57.875572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.346 [2024-10-01 13:43:57.875606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.346 [2024-10-01 13:43:57.878599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.346 [2024-10-01 13:43:57.878720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.346 [2024-10-01 13:43:57.878753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.346 [2024-10-01 13:43:57.878771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.346 [2024-10-01 13:43:57.878804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.346 [2024-10-01 13:43:57.879738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.346 [2024-10-01 13:43:57.879777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.346 [2024-10-01 13:43:57.879795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.346 [2024-10-01 13:43:57.880011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.346 [2024-10-01 13:43:57.885514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.347 [2024-10-01 13:43:57.885665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.347 [2024-10-01 13:43:57.885700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.347 [2024-10-01 13:43:57.885719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.347 [2024-10-01 13:43:57.885752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.347 [2024-10-01 13:43:57.885785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.347 [2024-10-01 13:43:57.885802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.347 [2024-10-01 13:43:57.885817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.347 [2024-10-01 13:43:57.885850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.347 [2024-10-01 13:43:57.889626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.347 [2024-10-01 13:43:57.889771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.347 [2024-10-01 13:43:57.889805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.347 [2024-10-01 13:43:57.889824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.347 [2024-10-01 13:43:57.889858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.347 [2024-10-01 13:43:57.889891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.347 [2024-10-01 13:43:57.889909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.347 [2024-10-01 13:43:57.889924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.347 [2024-10-01 13:43:57.889956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.347 [2024-10-01 13:43:57.896913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.347 [2024-10-01 13:43:57.897045] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.347 [2024-10-01 13:43:57.897079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.347 [2024-10-01 13:43:57.897097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.347 [2024-10-01 13:43:57.897131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.347 [2024-10-01 13:43:57.897164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.347 [2024-10-01 13:43:57.897182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.347 [2024-10-01 13:43:57.897196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.347 [2024-10-01 13:43:57.897228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.347 [2024-10-01 13:43:57.899938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.347 [2024-10-01 13:43:57.900062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.347 [2024-10-01 13:43:57.900095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.347 [2024-10-01 13:43:57.900114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.347 [2024-10-01 13:43:57.900147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.347 [2024-10-01 13:43:57.900181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.347 [2024-10-01 13:43:57.900199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.347 [2024-10-01 13:43:57.900214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.347 [2024-10-01 13:43:57.900246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.347 [2024-10-01 13:43:57.907394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.347 [2024-10-01 13:43:57.907531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.347 [2024-10-01 13:43:57.907603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.347 [2024-10-01 13:43:57.907624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.347 [2024-10-01 13:43:57.907661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.347 [2024-10-01 13:43:57.908646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.347 [2024-10-01 13:43:57.908690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.348 [2024-10-01 13:43:57.908709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.348 [2024-10-01 13:43:57.908943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.348 [2024-10-01 13:43:57.911420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.348 [2024-10-01 13:43:57.911586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.348 [2024-10-01 13:43:57.911630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.348 [2024-10-01 13:43:57.911651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.348 [2024-10-01 13:43:57.911700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.348 [2024-10-01 13:43:57.911734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.348 [2024-10-01 13:43:57.911752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.348 [2024-10-01 13:43:57.911766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.348 [2024-10-01 13:43:57.911799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.348 [2024-10-01 13:43:57.918462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.348 [2024-10-01 13:43:57.918631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.348 [2024-10-01 13:43:57.918668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.348 [2024-10-01 13:43:57.918687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.348 [2024-10-01 13:43:57.918723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.348 [2024-10-01 13:43:57.918757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.348 [2024-10-01 13:43:57.918775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.348 [2024-10-01 13:43:57.918790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.348 [2024-10-01 13:43:57.918823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.348 [2024-10-01 13:43:57.921767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.348 [2024-10-01 13:43:57.921901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.348 [2024-10-01 13:43:57.921934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.348 [2024-10-01 13:43:57.921953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.348 [2024-10-01 13:43:57.921987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.348 [2024-10-01 13:43:57.922958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.348 [2024-10-01 13:43:57.923001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.348 [2024-10-01 13:43:57.923020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.348 [2024-10-01 13:43:57.923256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.348 [2024-10-01 13:43:57.928860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.348 [2024-10-01 13:43:57.929037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.348 [2024-10-01 13:43:57.929077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.348 [2024-10-01 13:43:57.929096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.348 [2024-10-01 13:43:57.929133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.348 [2024-10-01 13:43:57.929167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.348 [2024-10-01 13:43:57.929185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.348 [2024-10-01 13:43:57.929200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.348 [2024-10-01 13:43:57.929233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.348 [2024-10-01 13:43:57.932918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.348 [2024-10-01 13:43:57.933062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.348 [2024-10-01 13:43:57.933096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.348 [2024-10-01 13:43:57.933115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.348 [2024-10-01 13:43:57.933149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.348 [2024-10-01 13:43:57.933182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.348 [2024-10-01 13:43:57.933201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.348 [2024-10-01 13:43:57.933216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.348 [2024-10-01 13:43:57.933248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.348 [2024-10-01 13:43:57.940324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.348 [2024-10-01 13:43:57.940499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.348 [2024-10-01 13:43:57.940552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.348 [2024-10-01 13:43:57.940576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.348 [2024-10-01 13:43:57.940614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.348 [2024-10-01 13:43:57.940647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.348 [2024-10-01 13:43:57.940666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.348 [2024-10-01 13:43:57.940681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.348 [2024-10-01 13:43:57.940714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.348 [2024-10-01 13:43:57.943326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.348 [2024-10-01 13:43:57.943455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.348 [2024-10-01 13:43:57.943489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.348 [2024-10-01 13:43:57.943551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.348 [2024-10-01 13:43:57.943593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.348 [2024-10-01 13:43:57.943626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.348 [2024-10-01 13:43:57.943645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.348 [2024-10-01 13:43:57.943660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.349 [2024-10-01 13:43:57.943692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.349 [2024-10-01 13:43:57.950650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.349 [2024-10-01 13:43:57.950781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.349 [2024-10-01 13:43:57.950821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.349 [2024-10-01 13:43:57.950848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.349 [2024-10-01 13:43:57.951822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.349 [2024-10-01 13:43:57.952089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.349 [2024-10-01 13:43:57.952130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.349 [2024-10-01 13:43:57.952149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.349 [2024-10-01 13:43:57.952232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.349 [2024-10-01 13:43:57.953861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.349 [2024-10-01 13:43:57.953999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.349 [2024-10-01 13:43:57.954034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.349 [2024-10-01 13:43:57.954053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.349 [2024-10-01 13:43:57.954091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.349 [2024-10-01 13:43:57.954139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.349 [2024-10-01 13:43:57.954170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.349 [2024-10-01 13:43:57.954187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.349 [2024-10-01 13:43:57.954993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.349 [2024-10-01 13:43:57.961227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.349 [2024-10-01 13:43:57.961384] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.349 [2024-10-01 13:43:57.961419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.349 [2024-10-01 13:43:57.961439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.349 [2024-10-01 13:43:57.961473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.349 [2024-10-01 13:43:57.961506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.349 [2024-10-01 13:43:57.961571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.349 [2024-10-01 13:43:57.961589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.349 [2024-10-01 13:43:57.962515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.349 [2024-10-01 13:43:57.964194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.349 [2024-10-01 13:43:57.964322] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.349 [2024-10-01 13:43:57.964355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.349 [2024-10-01 13:43:57.964374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.349 [2024-10-01 13:43:57.964408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.349 [2024-10-01 13:43:57.964442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.349 [2024-10-01 13:43:57.964460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.349 [2024-10-01 13:43:57.964474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.349 [2024-10-01 13:43:57.964506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.349 [2024-10-01 13:43:57.971335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.349 [2024-10-01 13:43:57.972669] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.349 [2024-10-01 13:43:57.972716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.349 [2024-10-01 13:43:57.972737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.349 [2024-10-01 13:43:57.973613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.349 [2024-10-01 13:43:57.973762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.349 [2024-10-01 13:43:57.973789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.349 [2024-10-01 13:43:57.973805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.349 [2024-10-01 13:43:57.973839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.349 [2024-10-01 13:43:57.974290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.349 [2024-10-01 13:43:57.974397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.349 [2024-10-01 13:43:57.974429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.349 [2024-10-01 13:43:57.974447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.349 [2024-10-01 13:43:57.974480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.349 [2024-10-01 13:43:57.974512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.349 [2024-10-01 13:43:57.974530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.349 [2024-10-01 13:43:57.974575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.349 [2024-10-01 13:43:57.974621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.350 [2024-10-01 13:43:57.982252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.350 [2024-10-01 13:43:57.982378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.350 [2024-10-01 13:43:57.982414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.350 [2024-10-01 13:43:57.982432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.350 [2024-10-01 13:43:57.982466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.350 [2024-10-01 13:43:57.982499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.350 [2024-10-01 13:43:57.982516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.350 [2024-10-01 13:43:57.982531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.350 [2024-10-01 13:43:57.983800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.350 [2024-10-01 13:43:57.984977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.350 [2024-10-01 13:43:57.985094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.350 [2024-10-01 13:43:57.985127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.350 [2024-10-01 13:43:57.985145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.350 [2024-10-01 13:43:57.985179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.350 [2024-10-01 13:43:57.985214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.350 [2024-10-01 13:43:57.985233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.350 [2024-10-01 13:43:57.985247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.350 [2024-10-01 13:43:57.985279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.350 [2024-10-01 13:43:57.992796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.350 [2024-10-01 13:43:57.992935] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.350 [2024-10-01 13:43:57.992970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.350 [2024-10-01 13:43:57.992989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.350 [2024-10-01 13:43:57.993023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.350 [2024-10-01 13:43:57.993076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.350 [2024-10-01 13:43:57.993099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.350 [2024-10-01 13:43:57.993114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.350 [2024-10-01 13:43:57.993148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.350 [2024-10-01 13:43:57.995071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.350 [2024-10-01 13:43:57.995191] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.350 [2024-10-01 13:43:57.995225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.350 [2024-10-01 13:43:57.995243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.350 [2024-10-01 13:43:57.996214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.350 [2024-10-01 13:43:57.996433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.350 [2024-10-01 13:43:57.996461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.350 [2024-10-01 13:43:57.996477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.350 [2024-10-01 13:43:57.996571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.350 [2024-10-01 13:43:58.002903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.350 [2024-10-01 13:43:58.003049] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.350 [2024-10-01 13:43:58.003084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.350 [2024-10-01 13:43:58.003102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.350 [2024-10-01 13:43:58.003136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.350 [2024-10-01 13:43:58.003169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.350 [2024-10-01 13:43:58.003186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.350 [2024-10-01 13:43:58.003201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.350 [2024-10-01 13:43:58.003232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.350 [2024-10-01 13:43:58.006684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.350 [2024-10-01 13:43:58.006807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.350 [2024-10-01 13:43:58.006841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.350 [2024-10-01 13:43:58.006859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.350 [2024-10-01 13:43:58.006893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.350 [2024-10-01 13:43:58.006925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.350 [2024-10-01 13:43:58.006943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.350 [2024-10-01 13:43:58.006957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.350 [2024-10-01 13:43:58.006990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.350 [2024-10-01 13:43:58.014955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.350 [2024-10-01 13:43:58.015088] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.350 [2024-10-01 13:43:58.015122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.350 [2024-10-01 13:43:58.015140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.350 [2024-10-01 13:43:58.015175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.350 [2024-10-01 13:43:58.015208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.350 [2024-10-01 13:43:58.015226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.351 [2024-10-01 13:43:58.015261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.351 [2024-10-01 13:43:58.015296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.351 [2024-10-01 13:43:58.017983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.351 [2024-10-01 13:43:58.018107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.351 [2024-10-01 13:43:58.018140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.351 [2024-10-01 13:43:58.018159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.351 [2024-10-01 13:43:58.018193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.351 [2024-10-01 13:43:58.018225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.351 [2024-10-01 13:43:58.018243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.351 [2024-10-01 13:43:58.018258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.351 [2024-10-01 13:43:58.018290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.351 [2024-10-01 13:43:58.025788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.351 [2024-10-01 13:43:58.025913] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.351 [2024-10-01 13:43:58.025947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.351 [2024-10-01 13:43:58.025966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.351 [2024-10-01 13:43:58.025999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.351 [2024-10-01 13:43:58.026032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.351 [2024-10-01 13:43:58.026050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.351 [2024-10-01 13:43:58.026065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.351 [2024-10-01 13:43:58.027012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.351 [2024-10-01 13:43:58.029635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.351 [2024-10-01 13:43:58.029911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.351 [2024-10-01 13:43:58.029956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.351 [2024-10-01 13:43:58.029977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.351 [2024-10-01 13:43:58.030021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.351 [2024-10-01 13:43:58.030057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.351 [2024-10-01 13:43:58.030076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.351 [2024-10-01 13:43:58.030091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.351 [2024-10-01 13:43:58.030123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.351 [2024-10-01 13:43:58.036882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.351 [2024-10-01 13:43:58.037028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.351 [2024-10-01 13:43:58.037062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.351 [2024-10-01 13:43:58.037081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.351 [2024-10-01 13:43:58.037115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.351 [2024-10-01 13:43:58.037147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.351 [2024-10-01 13:43:58.037166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.351 [2024-10-01 13:43:58.037180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.351 [2024-10-01 13:43:58.037212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.351 [2024-10-01 13:43:58.040237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.351 [2024-10-01 13:43:58.040357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.351 [2024-10-01 13:43:58.040389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.351 [2024-10-01 13:43:58.040408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.351 [2024-10-01 13:43:58.040441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.351 [2024-10-01 13:43:58.041395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.351 [2024-10-01 13:43:58.041437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.351 [2024-10-01 13:43:58.041456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.351 [2024-10-01 13:43:58.041676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.351 [2024-10-01 13:43:58.047275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.351 [2024-10-01 13:43:58.047400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.351 [2024-10-01 13:43:58.047433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.351 [2024-10-01 13:43:58.047451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.351 [2024-10-01 13:43:58.047485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.351 [2024-10-01 13:43:58.047518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.351 [2024-10-01 13:43:58.047552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.351 [2024-10-01 13:43:58.047570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.351 [2024-10-01 13:43:58.047604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.351 [2024-10-01 13:43:58.051286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.351 [2024-10-01 13:43:58.051408] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.351 [2024-10-01 13:43:58.051441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.351 [2024-10-01 13:43:58.051460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.352 [2024-10-01 13:43:58.051494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.352 [2024-10-01 13:43:58.051573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.352 [2024-10-01 13:43:58.051595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.352 [2024-10-01 13:43:58.051610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.352 [2024-10-01 13:43:58.051642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.352 [2024-10-01 13:43:58.058514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.352 [2024-10-01 13:43:58.058678] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.352 [2024-10-01 13:43:58.058712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.352 [2024-10-01 13:43:58.058731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.352 [2024-10-01 13:43:58.058765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.352 [2024-10-01 13:43:58.058798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.352 [2024-10-01 13:43:58.058816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.352 [2024-10-01 13:43:58.058830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.352 [2024-10-01 13:43:58.058863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.352 [2024-10-01 13:43:58.061516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.352 [2024-10-01 13:43:58.061648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.352 [2024-10-01 13:43:58.061680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.352 [2024-10-01 13:43:58.061699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.352 [2024-10-01 13:43:58.061732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.352 [2024-10-01 13:43:58.061765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.352 [2024-10-01 13:43:58.061783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.352 [2024-10-01 13:43:58.061797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.352 [2024-10-01 13:43:58.061829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.352 [2024-10-01 13:43:58.068724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.352 [2024-10-01 13:43:58.068847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.352 [2024-10-01 13:43:58.068880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.352 [2024-10-01 13:43:58.068899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.352 [2024-10-01 13:43:58.068934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.352 [2024-10-01 13:43:58.069862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.352 [2024-10-01 13:43:58.069902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.352 [2024-10-01 13:43:58.069920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.352 [2024-10-01 13:43:58.070177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.352 [2024-10-01 13:43:58.072702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.352 [2024-10-01 13:43:58.072826] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.352 [2024-10-01 13:43:58.072860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.352 [2024-10-01 13:43:58.072879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.352 [2024-10-01 13:43:58.072912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.352 [2024-10-01 13:43:58.072945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.352 [2024-10-01 13:43:58.072963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.352 [2024-10-01 13:43:58.072977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.352 [2024-10-01 13:43:58.073009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.352 [2024-10-01 13:43:58.079773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.352 [2024-10-01 13:43:58.079968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.352 [2024-10-01 13:43:58.080004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.352 [2024-10-01 13:43:58.080024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.352 [2024-10-01 13:43:58.080060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.352 [2024-10-01 13:43:58.080104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.352 [2024-10-01 13:43:58.080121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.352 [2024-10-01 13:43:58.080137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.352 [2024-10-01 13:43:58.080170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.352 [2024-10-01 13:43:58.083096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.352 [2024-10-01 13:43:58.083215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.352 [2024-10-01 13:43:58.083248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.352 [2024-10-01 13:43:58.083266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.352 [2024-10-01 13:43:58.084227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.352 [2024-10-01 13:43:58.084457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.352 [2024-10-01 13:43:58.084503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.352 [2024-10-01 13:43:58.084521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.352 [2024-10-01 13:43:58.084617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.352 [2024-10-01 13:43:58.090008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.352 [2024-10-01 13:43:58.090129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.352 [2024-10-01 13:43:58.090162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.352 [2024-10-01 13:43:58.090210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.353 [2024-10-01 13:43:58.090247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.353 [2024-10-01 13:43:58.090280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.353 [2024-10-01 13:43:58.090299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.353 [2024-10-01 13:43:58.090313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.353 [2024-10-01 13:43:58.090345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.353 [2024-10-01 13:43:58.094043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.353 [2024-10-01 13:43:58.094165] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.353 [2024-10-01 13:43:58.094209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.353 [2024-10-01 13:43:58.094228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.353 [2024-10-01 13:43:58.094262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.353 [2024-10-01 13:43:58.094295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.353 [2024-10-01 13:43:58.094313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.353 [2024-10-01 13:43:58.094327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.353 [2024-10-01 13:43:58.094359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.353 [2024-10-01 13:43:58.101288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.353 [2024-10-01 13:43:58.101420] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.353 [2024-10-01 13:43:58.101461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.353 [2024-10-01 13:43:58.101480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.353 [2024-10-01 13:43:58.101514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.353 [2024-10-01 13:43:58.101562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.353 [2024-10-01 13:43:58.101584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.353 [2024-10-01 13:43:58.101598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.353 [2024-10-01 13:43:58.101630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.353 [2024-10-01 13:43:58.104302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.353 [2024-10-01 13:43:58.104420] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.353 [2024-10-01 13:43:58.104452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.353 [2024-10-01 13:43:58.104470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.353 [2024-10-01 13:43:58.104503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.353 [2024-10-01 13:43:58.104551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.353 [2024-10-01 13:43:58.104591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.353 [2024-10-01 13:43:58.104607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.353 [2024-10-01 13:43:58.104641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.353 [2024-10-01 13:43:58.111689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.353 [2024-10-01 13:43:58.111821] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.353 [2024-10-01 13:43:58.111854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.353 [2024-10-01 13:43:58.111885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.353 [2024-10-01 13:43:58.111924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.353 [2024-10-01 13:43:58.112874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.353 [2024-10-01 13:43:58.112914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.353 [2024-10-01 13:43:58.112933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.353 [2024-10-01 13:43:58.113155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.353 [2024-10-01 13:43:58.115704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.353 [2024-10-01 13:43:58.115825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.353 [2024-10-01 13:43:58.115858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.353 [2024-10-01 13:43:58.115891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.353 [2024-10-01 13:43:58.115929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.353 [2024-10-01 13:43:58.115962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.353 [2024-10-01 13:43:58.115980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.353 [2024-10-01 13:43:58.116003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.353 [2024-10-01 13:43:58.116035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.353 [2024-10-01 13:43:58.122711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.353 [2024-10-01 13:43:58.122843] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.353 [2024-10-01 13:43:58.122878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.353 [2024-10-01 13:43:58.122897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.353 [2024-10-01 13:43:58.122931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.353 [2024-10-01 13:43:58.122963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.353 [2024-10-01 13:43:58.122981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.353 [2024-10-01 13:43:58.122995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.353 [2024-10-01 13:43:58.123027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.353 [2024-10-01 13:43:58.126108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.353 [2024-10-01 13:43:58.126256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.353 [2024-10-01 13:43:58.126290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.353 [2024-10-01 13:43:58.126309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.353 [2024-10-01 13:43:58.127265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.354 [2024-10-01 13:43:58.127499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.354 [2024-10-01 13:43:58.127550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.354 [2024-10-01 13:43:58.127572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.354 [2024-10-01 13:43:58.127655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.354 [2024-10-01 13:43:58.133024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.354 [2024-10-01 13:43:58.133147] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.354 [2024-10-01 13:43:58.133181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.354 [2024-10-01 13:43:58.133199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.354 [2024-10-01 13:43:58.133233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.354 [2024-10-01 13:43:58.133266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.354 [2024-10-01 13:43:58.133284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.354 [2024-10-01 13:43:58.133298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.354 [2024-10-01 13:43:58.133331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.354 [2024-10-01 13:43:58.137035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.354 [2024-10-01 13:43:58.137157] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.354 [2024-10-01 13:43:58.137190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.354 [2024-10-01 13:43:58.137208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.354 [2024-10-01 13:43:58.137242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.354 [2024-10-01 13:43:58.137275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.354 [2024-10-01 13:43:58.137293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.354 [2024-10-01 13:43:58.137308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.354 [2024-10-01 13:43:58.137339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.354 [2024-10-01 13:43:58.144260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.354 [2024-10-01 13:43:58.144399] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.354 [2024-10-01 13:43:58.144433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.354 [2024-10-01 13:43:58.144452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.354 [2024-10-01 13:43:58.144511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.354 [2024-10-01 13:43:58.144561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.354 [2024-10-01 13:43:58.144582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.354 [2024-10-01 13:43:58.144596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.354 [2024-10-01 13:43:58.144629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.354 [2024-10-01 13:43:58.147264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.354 [2024-10-01 13:43:58.147395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.354 [2024-10-01 13:43:58.147428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.354 [2024-10-01 13:43:58.147447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.354 [2024-10-01 13:43:58.147480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.354 [2024-10-01 13:43:58.147513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.354 [2024-10-01 13:43:58.147531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.354 [2024-10-01 13:43:58.147566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.354 [2024-10-01 13:43:58.147599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.354 [2024-10-01 13:43:58.155513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.354 [2024-10-01 13:43:58.155671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.354 [2024-10-01 13:43:58.155705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.354 [2024-10-01 13:43:58.155724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.354 [2024-10-01 13:43:58.155759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.354 [2024-10-01 13:43:58.156721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.354 [2024-10-01 13:43:58.156761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.354 [2024-10-01 13:43:58.156781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.354 [2024-10-01 13:43:58.156988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.354 [2024-10-01 13:43:58.157361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.354 [2024-10-01 13:43:58.157472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.355 [2024-10-01 13:43:58.157504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.355 [2024-10-01 13:43:58.157523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.355 [2024-10-01 13:43:58.158804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.355 [2024-10-01 13:43:58.159757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.355 [2024-10-01 13:43:58.159798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.355 [2024-10-01 13:43:58.159838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.355 [2024-10-01 13:43:58.160065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.355 [2024-10-01 13:43:58.166983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.355 [2024-10-01 13:43:58.167712] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.355 [2024-10-01 13:43:58.167759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.355 [2024-10-01 13:43:58.167782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.355 [2024-10-01 13:43:58.167912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.355 [2024-10-01 13:43:58.167987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.355 [2024-10-01 13:43:58.168024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.355 [2024-10-01 13:43:58.168042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.355 [2024-10-01 13:43:58.168058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.355 [2024-10-01 13:43:58.168090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.355 [2024-10-01 13:43:58.168155] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.355 [2024-10-01 13:43:58.168183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.355 [2024-10-01 13:43:58.168201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.355 [2024-10-01 13:43:58.168235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.355 [2024-10-01 13:43:58.168499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.355 [2024-10-01 13:43:58.168527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.355 [2024-10-01 13:43:58.168560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.355 [2024-10-01 13:43:58.168708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.355 [2024-10-01 13:43:58.178945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.355 [2024-10-01 13:43:58.179066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.355 [2024-10-01 13:43:58.179182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.355 [2024-10-01 13:43:58.179216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.355 [2024-10-01 13:43:58.179235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.355 [2024-10-01 13:43:58.179314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.355 [2024-10-01 13:43:58.179342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.355 [2024-10-01 13:43:58.179359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.355 [2024-10-01 13:43:58.179381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.355 [2024-10-01 13:43:58.179414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.355 [2024-10-01 13:43:58.179467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.355 [2024-10-01 13:43:58.179483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.355 [2024-10-01 13:43:58.179499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.355 [2024-10-01 13:43:58.179550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.355 [2024-10-01 13:43:58.179575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.355 [2024-10-01 13:43:58.179590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.355 [2024-10-01 13:43:58.179604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.355 [2024-10-01 13:43:58.179636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.355 [2024-10-01 13:43:58.190677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.355 [2024-10-01 13:43:58.190775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.355 [2024-10-01 13:43:58.190945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.355 [2024-10-01 13:43:58.191018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.355 [2024-10-01 13:43:58.191054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.355 [2024-10-01 13:43:58.191151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.355 [2024-10-01 13:43:58.191198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.355 [2024-10-01 13:43:58.191234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.355 [2024-10-01 13:43:58.191292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.355 [2024-10-01 13:43:58.191322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.356 [2024-10-01 13:43:58.191351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.356 [2024-10-01 13:43:58.191369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.356 [2024-10-01 13:43:58.191384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.356 [2024-10-01 13:43:58.191402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.356 [2024-10-01 13:43:58.191418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.356 [2024-10-01 13:43:58.191432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.356 [2024-10-01 13:43:58.192694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.356 [2024-10-01 13:43:58.192736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.356 [2024-10-01 13:43:58.201700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.356 [2024-10-01 13:43:58.201771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.356 [2024-10-01 13:43:58.201879] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.356 [2024-10-01 13:43:58.201913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.356 [2024-10-01 13:43:58.201932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.356 [2024-10-01 13:43:58.202014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.356 [2024-10-01 13:43:58.202042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.356 [2024-10-01 13:43:58.202060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.356 [2024-10-01 13:43:58.203000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.356 [2024-10-01 13:43:58.203063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.356 [2024-10-01 13:43:58.203296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.356 [2024-10-01 13:43:58.203335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.356 [2024-10-01 13:43:58.203354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.356 [2024-10-01 13:43:58.203373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.356 [2024-10-01 13:43:58.203388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.356 [2024-10-01 13:43:58.203402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.356 [2024-10-01 13:43:58.203517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.356 [2024-10-01 13:43:58.203564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.356 [2024-10-01 13:43:58.213240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.356 [2024-10-01 13:43:58.213302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.356 [2024-10-01 13:43:58.213665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.356 [2024-10-01 13:43:58.213712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.356 [2024-10-01 13:43:58.213734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.356 [2024-10-01 13:43:58.213788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.356 [2024-10-01 13:43:58.213814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.356 [2024-10-01 13:43:58.213831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.356 [2024-10-01 13:43:58.213977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.356 [2024-10-01 13:43:58.214015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.356 [2024-10-01 13:43:58.214156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.356 [2024-10-01 13:43:58.214183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.356 [2024-10-01 13:43:58.214200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.356 [2024-10-01 13:43:58.214219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.356 [2024-10-01 13:43:58.214234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.356 [2024-10-01 13:43:58.214248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.356 [2024-10-01 13:43:58.214289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.356 [2024-10-01 13:43:58.214331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.356 [2024-10-01 13:43:58.224063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.356 [2024-10-01 13:43:58.224126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.356 [2024-10-01 13:43:58.224247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.356 [2024-10-01 13:43:58.224281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.356 [2024-10-01 13:43:58.224299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.356 [2024-10-01 13:43:58.224350] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.356 [2024-10-01 13:43:58.224376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.356 [2024-10-01 13:43:58.224392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.356 [2024-10-01 13:43:58.224426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.356 [2024-10-01 13:43:58.224450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.356 [2024-10-01 13:43:58.224476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.356 [2024-10-01 13:43:58.224494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.356 [2024-10-01 13:43:58.224509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.356 [2024-10-01 13:43:58.224527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.356 [2024-10-01 13:43:58.224560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.356 [2024-10-01 13:43:58.224576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.356 [2024-10-01 13:43:58.224609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.356 [2024-10-01 13:43:58.224629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.356 [2024-10-01 13:43:58.236134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.356 [2024-10-01 13:43:58.236203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.356 [2024-10-01 13:43:58.236326] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.356 [2024-10-01 13:43:58.236361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.356 [2024-10-01 13:43:58.236380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.356 [2024-10-01 13:43:58.236432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.356 [2024-10-01 13:43:58.236458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.356 [2024-10-01 13:43:58.236474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.356 [2024-10-01 13:43:58.236511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.357 [2024-10-01 13:43:58.236552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.357 [2024-10-01 13:43:58.236586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.357 [2024-10-01 13:43:58.236627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.357 [2024-10-01 13:43:58.236643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.357 [2024-10-01 13:43:58.236667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.357 [2024-10-01 13:43:58.236683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.357 [2024-10-01 13:43:58.236697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.357 [2024-10-01 13:43:58.236731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.357 [2024-10-01 13:43:58.236752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.357 [2024-10-01 13:43:58.246586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.357 [2024-10-01 13:43:58.246644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.357 [2024-10-01 13:43:58.246769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.357 [2024-10-01 13:43:58.246803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.357 [2024-10-01 13:43:58.246821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.357 [2024-10-01 13:43:58.246873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.357 [2024-10-01 13:43:58.246898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.357 [2024-10-01 13:43:58.246915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.357 [2024-10-01 13:43:58.247850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.357 [2024-10-01 13:43:58.247912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.357 [2024-10-01 13:43:58.248120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.357 [2024-10-01 13:43:58.248150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.357 [2024-10-01 13:43:58.248176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.357 [2024-10-01 13:43:58.248194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.357 [2024-10-01 13:43:58.248211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.357 [2024-10-01 13:43:58.248224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.357 [2024-10-01 13:43:58.248302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.357 [2024-10-01 13:43:58.248324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.357 [2024-10-01 13:43:58.258035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.357 [2024-10-01 13:43:58.258108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.357 [2024-10-01 13:43:58.258219] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.357 [2024-10-01 13:43:58.258253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.357 [2024-10-01 13:43:58.258272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.357 [2024-10-01 13:43:58.258323] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.357 [2024-10-01 13:43:58.258377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.357 [2024-10-01 13:43:58.258398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.357 [2024-10-01 13:43:58.258432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.357 [2024-10-01 13:43:58.258456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.357 [2024-10-01 13:43:58.258483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.357 [2024-10-01 13:43:58.258501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.357 [2024-10-01 13:43:58.258516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.357 [2024-10-01 13:43:58.258547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.357 [2024-10-01 13:43:58.258566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.357 [2024-10-01 13:43:58.258580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.357 [2024-10-01 13:43:58.258614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.357 [2024-10-01 13:43:58.258635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.357 [2024-10-01 13:43:58.269150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.357 [2024-10-01 13:43:58.269215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.357 [2024-10-01 13:43:58.269344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.357 [2024-10-01 13:43:58.269378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.357 [2024-10-01 13:43:58.269397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.357 [2024-10-01 13:43:58.269448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.358 [2024-10-01 13:43:58.269474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.358 [2024-10-01 13:43:58.269490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.358 [2024-10-01 13:43:58.269525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.358 [2024-10-01 13:43:58.269566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.358 [2024-10-01 13:43:58.269597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.358 [2024-10-01 13:43:58.269616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.358 [2024-10-01 13:43:58.269631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.358 [2024-10-01 13:43:58.269649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.358 [2024-10-01 13:43:58.269664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.358 [2024-10-01 13:43:58.269678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.358 [2024-10-01 13:43:58.269710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.358 [2024-10-01 13:43:58.269730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.358 [2024-10-01 13:43:58.280821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.358 [2024-10-01 13:43:58.280890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.358 [2024-10-01 13:43:58.281026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.358 [2024-10-01 13:43:58.281060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.358 [2024-10-01 13:43:58.281079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.358 [2024-10-01 13:43:58.281130] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.358 [2024-10-01 13:43:58.281155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.358 [2024-10-01 13:43:58.281172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.358 [2024-10-01 13:43:58.281206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.358 [2024-10-01 13:43:58.281231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.358 [2024-10-01 13:43:58.281258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.358 [2024-10-01 13:43:58.281276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.358 [2024-10-01 13:43:58.281291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.358 [2024-10-01 13:43:58.281309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.358 [2024-10-01 13:43:58.281324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.358 [2024-10-01 13:43:58.281340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.358 [2024-10-01 13:43:58.281373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.358 [2024-10-01 13:43:58.281393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.358 [2024-10-01 13:43:58.291692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.358 [2024-10-01 13:43:58.291789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.358 [2024-10-01 13:43:58.292856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.358 [2024-10-01 13:43:58.292908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.358 [2024-10-01 13:43:58.292931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.358 [2024-10-01 13:43:58.292987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.358 [2024-10-01 13:43:58.293012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.358 [2024-10-01 13:43:58.293029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.358 [2024-10-01 13:43:58.293224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.358 [2024-10-01 13:43:58.293256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.358 [2024-10-01 13:43:58.293394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.358 [2024-10-01 13:43:58.293420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.358 [2024-10-01 13:43:58.293466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.358 [2024-10-01 13:43:58.293487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.358 [2024-10-01 13:43:58.293503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.358 [2024-10-01 13:43:58.293518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.358 [2024-10-01 13:43:58.294803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.358 [2024-10-01 13:43:58.294849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.358 [2024-10-01 13:43:58.303494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.358 [2024-10-01 13:43:58.303568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.358 [2024-10-01 13:43:58.303792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.358 [2024-10-01 13:43:58.303827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.358 [2024-10-01 13:43:58.303847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.358 [2024-10-01 13:43:58.303917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.358 [2024-10-01 13:43:58.303945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.358 [2024-10-01 13:43:58.303962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.358 [2024-10-01 13:43:58.304089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.358 [2024-10-01 13:43:58.304121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.358 [2024-10-01 13:43:58.304158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.358 [2024-10-01 13:43:58.304178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.358 [2024-10-01 13:43:58.304194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.358 [2024-10-01 13:43:58.304212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.358 [2024-10-01 13:43:58.304228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.358 [2024-10-01 13:43:58.304241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.358 [2024-10-01 13:43:58.304275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.358 [2024-10-01 13:43:58.304295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.359 [2024-10-01 13:43:58.313656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.359 [2024-10-01 13:43:58.313743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.359 [2024-10-01 13:43:58.313832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.359 [2024-10-01 13:43:58.313864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.359 [2024-10-01 13:43:58.313883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.359 [2024-10-01 13:43:58.313952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.359 [2024-10-01 13:43:58.313980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.359 [2024-10-01 13:43:58.314021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.359 [2024-10-01 13:43:58.314042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.359 [2024-10-01 13:43:58.314076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.359 [2024-10-01 13:43:58.314097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.359 [2024-10-01 13:43:58.314112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.359 [2024-10-01 13:43:58.314126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.359 [2024-10-01 13:43:58.314162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.359 [2024-10-01 13:43:58.314182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.359 [2024-10-01 13:43:58.314196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.359 [2024-10-01 13:43:58.314211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.359 [2024-10-01 13:43:58.314241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.359 [2024-10-01 13:43:58.324623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.359 [2024-10-01 13:43:58.324686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.359 [2024-10-01 13:43:58.324796] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.359 [2024-10-01 13:43:58.324831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.359 [2024-10-01 13:43:58.324850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.359 [2024-10-01 13:43:58.324901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.359 [2024-10-01 13:43:58.324927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.359 [2024-10-01 13:43:58.324944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.359 [2024-10-01 13:43:58.324992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.359 [2024-10-01 13:43:58.325021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.359 [2024-10-01 13:43:58.325066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.359 [2024-10-01 13:43:58.325088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.359 [2024-10-01 13:43:58.325104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.359 [2024-10-01 13:43:58.325122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.359 [2024-10-01 13:43:58.325137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.359 [2024-10-01 13:43:58.325151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.359 [2024-10-01 13:43:58.325183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.359 [2024-10-01 13:43:58.325203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.359 [2024-10-01 13:43:58.334999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.359 [2024-10-01 13:43:58.335091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.359 [2024-10-01 13:43:58.336179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.359 [2024-10-01 13:43:58.336232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.359 [2024-10-01 13:43:58.336255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.359 [2024-10-01 13:43:58.336312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.359 [2024-10-01 13:43:58.336338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.359 [2024-10-01 13:43:58.336355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.359 [2024-10-01 13:43:58.336569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.359 [2024-10-01 13:43:58.336613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.359 [2024-10-01 13:43:58.336759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.359 [2024-10-01 13:43:58.336824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.359 [2024-10-01 13:43:58.336862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.359 [2024-10-01 13:43:58.336896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.359 [2024-10-01 13:43:58.336929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.359 [2024-10-01 13:43:58.336958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.359 [2024-10-01 13:43:58.338615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.359 [2024-10-01 13:43:58.338679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.359 [2024-10-01 13:43:58.345822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.359 [2024-10-01 13:43:58.345881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.359 [2024-10-01 13:43:58.345992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.359 [2024-10-01 13:43:58.346025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.359 [2024-10-01 13:43:58.346044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.359 [2024-10-01 13:43:58.346096] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.359 [2024-10-01 13:43:58.346121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.359 [2024-10-01 13:43:58.346138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.360 [2024-10-01 13:43:58.346172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.360 [2024-10-01 13:43:58.346195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.360 [2024-10-01 13:43:58.346223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.360 [2024-10-01 13:43:58.346240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.360 [2024-10-01 13:43:58.346255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.360 [2024-10-01 13:43:58.346294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.360 [2024-10-01 13:43:58.346312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.360 [2024-10-01 13:43:58.346326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.360 [2024-10-01 13:43:58.346360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.360 [2024-10-01 13:43:58.346380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.360 [2024-10-01 13:43:58.356220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.360 [2024-10-01 13:43:58.356285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.360 [2024-10-01 13:43:58.356395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.360 [2024-10-01 13:43:58.356428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.360 [2024-10-01 13:43:58.356447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.360 [2024-10-01 13:43:58.356498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.360 [2024-10-01 13:43:58.356524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.360 [2024-10-01 13:43:58.356558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.360 [2024-10-01 13:43:58.356596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.360 [2024-10-01 13:43:58.356620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.360 [2024-10-01 13:43:58.356647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.360 [2024-10-01 13:43:58.356665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.360 [2024-10-01 13:43:58.356680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.360 [2024-10-01 13:43:58.356697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.360 [2024-10-01 13:43:58.356713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.360 [2024-10-01 13:43:58.356726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.360 [2024-10-01 13:43:58.356758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.360 [2024-10-01 13:43:58.356778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.360 [2024-10-01 13:43:58.367485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.360 [2024-10-01 13:43:58.367565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.360 [2024-10-01 13:43:58.367687] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.360 [2024-10-01 13:43:58.367720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.360 [2024-10-01 13:43:58.367739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.360 [2024-10-01 13:43:58.367790] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.360 [2024-10-01 13:43:58.367815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.360 [2024-10-01 13:43:58.367831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.360 [2024-10-01 13:43:58.367919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.360 [2024-10-01 13:43:58.367949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.360 [2024-10-01 13:43:58.367978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.360 [2024-10-01 13:43:58.367997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.360 [2024-10-01 13:43:58.368012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.360 [2024-10-01 13:43:58.368030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.360 [2024-10-01 13:43:58.368046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.360 [2024-10-01 13:43:58.368059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.360 [2024-10-01 13:43:58.368091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.360 [2024-10-01 13:43:58.368111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.360 [2024-10-01 13:43:58.377985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.360 [2024-10-01 13:43:58.378077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.360 [2024-10-01 13:43:58.378209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.360 [2024-10-01 13:43:58.378244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.361 [2024-10-01 13:43:58.378263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.361 [2024-10-01 13:43:58.378315] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.361 [2024-10-01 13:43:58.378340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.361 [2024-10-01 13:43:58.378357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.361 [2024-10-01 13:43:58.379303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.361 [2024-10-01 13:43:58.379354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.361 [2024-10-01 13:43:58.379568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.361 [2024-10-01 13:43:58.379605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.361 [2024-10-01 13:43:58.379624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.361 [2024-10-01 13:43:58.379642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.361 [2024-10-01 13:43:58.379658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.361 [2024-10-01 13:43:58.379673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.361 [2024-10-01 13:43:58.379788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.361 [2024-10-01 13:43:58.379821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.361 [2024-10-01 13:43:58.389048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.361 [2024-10-01 13:43:58.389107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.361 [2024-10-01 13:43:58.389242] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.361 [2024-10-01 13:43:58.389278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.361 [2024-10-01 13:43:58.389296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.361 [2024-10-01 13:43:58.389348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.361 [2024-10-01 13:43:58.389374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.361 [2024-10-01 13:43:58.389390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.361 [2024-10-01 13:43:58.389424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.361 [2024-10-01 13:43:58.389448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.361 [2024-10-01 13:43:58.389475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.361 [2024-10-01 13:43:58.389493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.361 [2024-10-01 13:43:58.389508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.361 [2024-10-01 13:43:58.389526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.361 [2024-10-01 13:43:58.389559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.361 [2024-10-01 13:43:58.389574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.361 [2024-10-01 13:43:58.389839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.361 [2024-10-01 13:43:58.389866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.361 [2024-10-01 13:43:58.399277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.361 [2024-10-01 13:43:58.399331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.361 [2024-10-01 13:43:58.399430] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.361 [2024-10-01 13:43:58.399462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.361 [2024-10-01 13:43:58.399481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.361 [2024-10-01 13:43:58.399531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.361 [2024-10-01 13:43:58.399574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.361 [2024-10-01 13:43:58.399592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.361 [2024-10-01 13:43:58.399633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.361 [2024-10-01 13:43:58.399657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.361 [2024-10-01 13:43:58.399684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.361 [2024-10-01 13:43:58.399702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.361 [2024-10-01 13:43:58.399716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.361 [2024-10-01 13:43:58.399733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.361 [2024-10-01 13:43:58.399768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.361 [2024-10-01 13:43:58.399783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.362 [2024-10-01 13:43:58.399818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.362 [2024-10-01 13:43:58.399838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.362 [2024-10-01 13:43:58.410559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.362 [2024-10-01 13:43:58.410619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.362 [2024-10-01 13:43:58.410726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.362 [2024-10-01 13:43:58.410758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.362 [2024-10-01 13:43:58.410776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.362 [2024-10-01 13:43:58.410827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.362 [2024-10-01 13:43:58.410853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.362 [2024-10-01 13:43:58.410869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.362 [2024-10-01 13:43:58.410904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.362 [2024-10-01 13:43:58.410928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.362 [2024-10-01 13:43:58.410955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.362 [2024-10-01 13:43:58.410973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.362 [2024-10-01 13:43:58.410995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.362 [2024-10-01 13:43:58.411012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.362 [2024-10-01 13:43:58.411028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.362 [2024-10-01 13:43:58.411042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.362 [2024-10-01 13:43:58.411075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.362 [2024-10-01 13:43:58.411095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.362 [2024-10-01 13:43:58.420801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.362 [2024-10-01 13:43:58.420861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.362 [2024-10-01 13:43:58.420971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.362 [2024-10-01 13:43:58.421003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.362 [2024-10-01 13:43:58.421022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.362 [2024-10-01 13:43:58.421072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.362 [2024-10-01 13:43:58.421097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.362 [2024-10-01 13:43:58.421113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.362 [2024-10-01 13:43:58.422049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.362 [2024-10-01 13:43:58.422123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.362 [2024-10-01 13:43:58.422349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.362 [2024-10-01 13:43:58.422378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.362 [2024-10-01 13:43:58.422395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.362 [2024-10-01 13:43:58.422413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.362 [2024-10-01 13:43:58.422429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.362 [2024-10-01 13:43:58.422444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.362 [2024-10-01 13:43:58.423753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.362 [2024-10-01 13:43:58.423798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.362 [2024-10-01 13:43:58.431955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.362 [2024-10-01 13:43:58.432012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.362 [2024-10-01 13:43:58.432124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.362 [2024-10-01 13:43:58.432158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.362 [2024-10-01 13:43:58.432176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.362 [2024-10-01 13:43:58.432228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.362 [2024-10-01 13:43:58.432254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.362 [2024-10-01 13:43:58.432272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.362 [2024-10-01 13:43:58.432306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.362 [2024-10-01 13:43:58.432330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.362 [2024-10-01 13:43:58.432357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.362 [2024-10-01 13:43:58.432375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.362 [2024-10-01 13:43:58.432390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.362 [2024-10-01 13:43:58.432407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.362 [2024-10-01 13:43:58.432423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.362 [2024-10-01 13:43:58.432437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.362 [2024-10-01 13:43:58.432721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.362 [2024-10-01 13:43:58.432750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.362 [2024-10-01 13:43:58.442179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.362 [2024-10-01 13:43:58.442257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.362 [2024-10-01 13:43:58.442357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.362 [2024-10-01 13:43:58.442394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.362 [2024-10-01 13:43:58.442435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.362 [2024-10-01 13:43:58.442509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.362 [2024-10-01 13:43:58.442553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.362 [2024-10-01 13:43:58.442574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.363 [2024-10-01 13:43:58.442594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.363 [2024-10-01 13:43:58.442628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.363 [2024-10-01 13:43:58.442650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.363 [2024-10-01 13:43:58.442664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.363 [2024-10-01 13:43:58.442678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.363 [2024-10-01 13:43:58.442711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.363 [2024-10-01 13:43:58.442732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.363 [2024-10-01 13:43:58.442746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.363 [2024-10-01 13:43:58.442760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.363 [2024-10-01 13:43:58.442790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.363 [2024-10-01 13:43:58.453632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.363 [2024-10-01 13:43:58.453684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.363 [2024-10-01 13:43:58.453787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.363 [2024-10-01 13:43:58.453819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.363 [2024-10-01 13:43:58.453838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.363 [2024-10-01 13:43:58.453888] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.363 [2024-10-01 13:43:58.453913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.363 [2024-10-01 13:43:58.453929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.363 [2024-10-01 13:43:58.453963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.363 [2024-10-01 13:43:58.453986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.363 [2024-10-01 13:43:58.454014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.363 [2024-10-01 13:43:58.454035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.363 [2024-10-01 13:43:58.454049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.363 [2024-10-01 13:43:58.454067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.363 [2024-10-01 13:43:58.454082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.363 [2024-10-01 13:43:58.454112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.363 [2024-10-01 13:43:58.454148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.363 [2024-10-01 13:43:58.454169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.363 [2024-10-01 13:43:58.463991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.363 [2024-10-01 13:43:58.464060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.363 [2024-10-01 13:43:58.464173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.363 [2024-10-01 13:43:58.464207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.363 [2024-10-01 13:43:58.464226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.363 [2024-10-01 13:43:58.464280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.363 [2024-10-01 13:43:58.464305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.363 [2024-10-01 13:43:58.464322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.363 [2024-10-01 13:43:58.465271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.363 [2024-10-01 13:43:58.465318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.363 [2024-10-01 13:43:58.465527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.363 [2024-10-01 13:43:58.465571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.363 [2024-10-01 13:43:58.465589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.363 [2024-10-01 13:43:58.465607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.363 [2024-10-01 13:43:58.465624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.363 [2024-10-01 13:43:58.465638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.363 [2024-10-01 13:43:58.465720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.363 [2024-10-01 13:43:58.465743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.363 [2024-10-01 13:43:58.475069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.363 [2024-10-01 13:43:58.475125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.363 [2024-10-01 13:43:58.475239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.363 [2024-10-01 13:43:58.475272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.363 [2024-10-01 13:43:58.475291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.363 [2024-10-01 13:43:58.475342] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.363 [2024-10-01 13:43:58.475367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.363 [2024-10-01 13:43:58.475385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.363 [2024-10-01 13:43:58.475418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.363 [2024-10-01 13:43:58.475442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.363 [2024-10-01 13:43:58.475499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.363 [2024-10-01 13:43:58.475518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.363 [2024-10-01 13:43:58.475552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.363 [2024-10-01 13:43:58.475585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.363 [2024-10-01 13:43:58.475607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.363 [2024-10-01 13:43:58.475622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.363 [2024-10-01 13:43:58.475903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.364 [2024-10-01 13:43:58.475932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.364 [2024-10-01 13:43:58.485490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.364 [2024-10-01 13:43:58.485589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.364 [2024-10-01 13:43:58.485732] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.364 [2024-10-01 13:43:58.485765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.364 [2024-10-01 13:43:58.485785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.364 [2024-10-01 13:43:58.485837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.364 [2024-10-01 13:43:58.485862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.364 [2024-10-01 13:43:58.485879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.364 [2024-10-01 13:43:58.485914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.364 [2024-10-01 13:43:58.485939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.364 [2024-10-01 13:43:58.485966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.364 [2024-10-01 13:43:58.485984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.364 [2024-10-01 13:43:58.486000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.364 [2024-10-01 13:43:58.486017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.364 [2024-10-01 13:43:58.486032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.364 [2024-10-01 13:43:58.486046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.364 [2024-10-01 13:43:58.486078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.364 [2024-10-01 13:43:58.486098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.364 [2024-10-01 13:43:58.497792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.364 [2024-10-01 13:43:58.497865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.364 [2024-10-01 13:43:58.497989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.364 [2024-10-01 13:43:58.498022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.364 [2024-10-01 13:43:58.498041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.364 [2024-10-01 13:43:58.498126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.364 [2024-10-01 13:43:58.498153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.364 [2024-10-01 13:43:58.498170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.364 [2024-10-01 13:43:58.498205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.364 [2024-10-01 13:43:58.498230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.364 [2024-10-01 13:43:58.498257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.364 [2024-10-01 13:43:58.498275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.364 [2024-10-01 13:43:58.498290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.364 [2024-10-01 13:43:58.498308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.364 [2024-10-01 13:43:58.498324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.364 [2024-10-01 13:43:58.498337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.364 [2024-10-01 13:43:58.498370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.364 [2024-10-01 13:43:58.498390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.364 [2024-10-01 13:43:58.508849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.364 [2024-10-01 13:43:58.508941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.364 [2024-10-01 13:43:58.509089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.364 [2024-10-01 13:43:58.509125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.364 [2024-10-01 13:43:58.509144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.364 [2024-10-01 13:43:58.509197] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.364 [2024-10-01 13:43:58.509222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.364 [2024-10-01 13:43:58.509239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.364 [2024-10-01 13:43:58.510206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.364 [2024-10-01 13:43:58.510253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.364 [2024-10-01 13:43:58.510477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.364 [2024-10-01 13:43:58.510516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.364 [2024-10-01 13:43:58.510549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.364 [2024-10-01 13:43:58.510571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.364 [2024-10-01 13:43:58.510588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.364 [2024-10-01 13:43:58.510602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.364 [2024-10-01 13:43:58.510743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.364 [2024-10-01 13:43:58.510768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.364 [2024-10-01 13:43:58.520110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.364 [2024-10-01 13:43:58.520197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.364 [2024-10-01 13:43:58.520332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.364 [2024-10-01 13:43:58.520367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.364 [2024-10-01 13:43:58.520386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.364 [2024-10-01 13:43:58.520438] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.364 [2024-10-01 13:43:58.520464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.364 [2024-10-01 13:43:58.520481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.364 [2024-10-01 13:43:58.520516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.364 [2024-10-01 13:43:58.520559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.364 [2024-10-01 13:43:58.520828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.364 [2024-10-01 13:43:58.520857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.365 [2024-10-01 13:43:58.520873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.365 [2024-10-01 13:43:58.520892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.365 [2024-10-01 13:43:58.520907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.365 [2024-10-01 13:43:58.520921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.365 [2024-10-01 13:43:58.521068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.365 [2024-10-01 13:43:58.521095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.365 [2024-10-01 13:43:58.530305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.365 [2024-10-01 13:43:58.530358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.365 [2024-10-01 13:43:58.530464] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.365 [2024-10-01 13:43:58.530497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.365 [2024-10-01 13:43:58.530516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.365 [2024-10-01 13:43:58.530584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.365 [2024-10-01 13:43:58.530612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.365 [2024-10-01 13:43:58.530629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.365 [2024-10-01 13:43:58.530663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.365 [2024-10-01 13:43:58.530687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.365 [2024-10-01 13:43:58.530714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.365 [2024-10-01 13:43:58.530762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.365 [2024-10-01 13:43:58.530779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.365 [2024-10-01 13:43:58.530796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.365 [2024-10-01 13:43:58.530812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.365 [2024-10-01 13:43:58.530825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.365 [2024-10-01 13:43:58.530858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.365 [2024-10-01 13:43:58.530878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.365 [2024-10-01 13:43:58.541698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.365 [2024-10-01 13:43:58.541759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.365 [2024-10-01 13:43:58.541869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.365 [2024-10-01 13:43:58.541903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.365 [2024-10-01 13:43:58.541922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.365 [2024-10-01 13:43:58.541974] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.365 [2024-10-01 13:43:58.541999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.365 [2024-10-01 13:43:58.542015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.365 [2024-10-01 13:43:58.542049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.365 [2024-10-01 13:43:58.542073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.365 [2024-10-01 13:43:58.542101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.365 [2024-10-01 13:43:58.542119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.365 [2024-10-01 13:43:58.542134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.365 [2024-10-01 13:43:58.542153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.365 [2024-10-01 13:43:58.542169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.365 [2024-10-01 13:43:58.542182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.365 [2024-10-01 13:43:58.542215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.365 [2024-10-01 13:43:58.542235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.365 [2024-10-01 13:43:58.551855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.365 [2024-10-01 13:43:58.551993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.365 [2024-10-01 13:43:58.552111] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.365 [2024-10-01 13:43:58.552145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.365 [2024-10-01 13:43:58.552164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.365 [2024-10-01 13:43:58.553189] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.365 [2024-10-01 13:43:58.553236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.365 [2024-10-01 13:43:58.553258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.365 [2024-10-01 13:43:58.553280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.365 [2024-10-01 13:43:58.553499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.365 [2024-10-01 13:43:58.553530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.365 [2024-10-01 13:43:58.553564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.365 [2024-10-01 13:43:58.553581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.365 [2024-10-01 13:43:58.553700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.365 [2024-10-01 13:43:58.553723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.365 [2024-10-01 13:43:58.553738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.365 [2024-10-01 13:43:58.553752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.365 [2024-10-01 13:43:58.554992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.365 [2024-10-01 13:43:58.563186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.366 [2024-10-01 13:43:58.563266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.366 [2024-10-01 13:43:58.563425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.366 [2024-10-01 13:43:58.563462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.366 [2024-10-01 13:43:58.563481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.366 [2024-10-01 13:43:58.563555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.366 [2024-10-01 13:43:58.563600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.366 [2024-10-01 13:43:58.563622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.366 [2024-10-01 13:43:58.563662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.366 [2024-10-01 13:43:58.563688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.366 [2024-10-01 13:43:58.563715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.366 [2024-10-01 13:43:58.563733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.366 [2024-10-01 13:43:58.563749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.366 [2024-10-01 13:43:58.563767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.366 [2024-10-01 13:43:58.563783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.366 [2024-10-01 13:43:58.563796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.366 [2024-10-01 13:43:58.563829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.366 [2024-10-01 13:43:58.563849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.366 [2024-10-01 13:43:58.575184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.366 [2024-10-01 13:43:58.575305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.366 [2024-10-01 13:43:58.575461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.366 [2024-10-01 13:43:58.575501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.366 [2024-10-01 13:43:58.575522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.366 [2024-10-01 13:43:58.575595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.366 [2024-10-01 13:43:58.575633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.366 [2024-10-01 13:43:58.575653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.366 [2024-10-01 13:43:58.575691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.366 [2024-10-01 13:43:58.575718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.366 [2024-10-01 13:43:58.575769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.366 [2024-10-01 13:43:58.575794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.366 [2024-10-01 13:43:58.575810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.366 [2024-10-01 13:43:58.575829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.366 [2024-10-01 13:43:58.575845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.366 [2024-10-01 13:43:58.575858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.366 [2024-10-01 13:43:58.575909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.366 [2024-10-01 13:43:58.575932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.366 [2024-10-01 13:43:58.586399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.366 [2024-10-01 13:43:58.586498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.366 [2024-10-01 13:43:58.586654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.366 [2024-10-01 13:43:58.586692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.366 [2024-10-01 13:43:58.586712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.366 [2024-10-01 13:43:58.586765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.366 [2024-10-01 13:43:58.586791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.366 [2024-10-01 13:43:58.586807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.366 [2024-10-01 13:43:58.586844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.366 [2024-10-01 13:43:58.586868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.366 [2024-10-01 13:43:58.586896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.366 [2024-10-01 13:43:58.586914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.366 [2024-10-01 13:43:58.586963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.366 [2024-10-01 13:43:58.586983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.366 [2024-10-01 13:43:58.586999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.366 [2024-10-01 13:43:58.587012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.366 [2024-10-01 13:43:58.587059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.366 [2024-10-01 13:43:58.587092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.366 [2024-10-01 13:43:58.596797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.366 [2024-10-01 13:43:58.596894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.366 [2024-10-01 13:43:58.597031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.366 [2024-10-01 13:43:58.597068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.366 [2024-10-01 13:43:58.597087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.366 [2024-10-01 13:43:58.597140] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.366 [2024-10-01 13:43:58.597165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.366 [2024-10-01 13:43:58.597182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.366 [2024-10-01 13:43:58.598140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.366 [2024-10-01 13:43:58.598187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.366 [2024-10-01 13:43:58.598386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.366 [2024-10-01 13:43:58.598432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.367 [2024-10-01 13:43:58.598451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.367 [2024-10-01 13:43:58.598470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.367 [2024-10-01 13:43:58.598486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.367 [2024-10-01 13:43:58.598500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.367 [2024-10-01 13:43:58.598631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.367 [2024-10-01 13:43:58.598656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.367 [2024-10-01 13:43:58.607806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.367 [2024-10-01 13:43:58.607864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.367 [2024-10-01 13:43:58.607988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.367 [2024-10-01 13:43:58.608021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.367 [2024-10-01 13:43:58.608039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.367 [2024-10-01 13:43:58.608090] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.367 [2024-10-01 13:43:58.608116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.367 [2024-10-01 13:43:58.608160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.367 [2024-10-01 13:43:58.608198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.367 [2024-10-01 13:43:58.608221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.367 [2024-10-01 13:43:58.608249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.367 [2024-10-01 13:43:58.608267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.367 [2024-10-01 13:43:58.608282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.367 [2024-10-01 13:43:58.608299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.367 [2024-10-01 13:43:58.608315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.367 [2024-10-01 13:43:58.608329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.367 [2024-10-01 13:43:58.608361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.367 [2024-10-01 13:43:58.608381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.367 [2024-10-01 13:43:58.619831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.367 [2024-10-01 13:43:58.619935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.367 [2024-10-01 13:43:58.620106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.367 [2024-10-01 13:43:58.620143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.367 [2024-10-01 13:43:58.620163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.367 [2024-10-01 13:43:58.620215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.367 [2024-10-01 13:43:58.620240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.367 [2024-10-01 13:43:58.620257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.367 [2024-10-01 13:43:58.620293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.367 [2024-10-01 13:43:58.620317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.367 [2024-10-01 13:43:58.620344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.367 [2024-10-01 13:43:58.620363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.367 [2024-10-01 13:43:58.620378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.367 [2024-10-01 13:43:58.620396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.367 [2024-10-01 13:43:58.620412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.367 [2024-10-01 13:43:58.620426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.367 [2024-10-01 13:43:58.620459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.367 [2024-10-01 13:43:58.620481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.367 [2024-10-01 13:43:58.631707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.367 [2024-10-01 13:43:58.631804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.367 [2024-10-01 13:43:58.632085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.368 [2024-10-01 13:43:58.632120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.368 [2024-10-01 13:43:58.632139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.368 [2024-10-01 13:43:58.632191] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.368 [2024-10-01 13:43:58.632217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.368 [2024-10-01 13:43:58.632233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.368 [2024-10-01 13:43:58.632276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.368 [2024-10-01 13:43:58.632302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.368 [2024-10-01 13:43:58.632330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.368 [2024-10-01 13:43:58.632349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.368 [2024-10-01 13:43:58.632365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.368 [2024-10-01 13:43:58.632384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.368 [2024-10-01 13:43:58.632400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.368 [2024-10-01 13:43:58.632414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.368 [2024-10-01 13:43:58.632447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.368 [2024-10-01 13:43:58.632467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.368 [2024-10-01 13:43:58.643026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.368 [2024-10-01 13:43:58.643129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.368 [2024-10-01 13:43:58.643269] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.368 [2024-10-01 13:43:58.643306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.368 [2024-10-01 13:43:58.643325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.368 [2024-10-01 13:43:58.643377] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.368 [2024-10-01 13:43:58.643403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.368 [2024-10-01 13:43:58.643422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.368 [2024-10-01 13:43:58.644439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.368 [2024-10-01 13:43:58.644489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.368 [2024-10-01 13:43:58.644718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.368 [2024-10-01 13:43:58.644766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.368 [2024-10-01 13:43:58.644786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.368 [2024-10-01 13:43:58.644836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.368 [2024-10-01 13:43:58.644855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.368 [2024-10-01 13:43:58.644869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.368 [2024-10-01 13:43:58.644988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.368 [2024-10-01 13:43:58.645012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.368 [2024-10-01 13:43:58.654374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.368 [2024-10-01 13:43:58.654460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.368 [2024-10-01 13:43:58.654617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.368 [2024-10-01 13:43:58.654653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.368 [2024-10-01 13:43:58.654673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.368 [2024-10-01 13:43:58.654726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.368 [2024-10-01 13:43:58.654753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.368 [2024-10-01 13:43:58.654770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.368 [2024-10-01 13:43:58.654820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.368 [2024-10-01 13:43:58.654859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.368 [2024-10-01 13:43:58.654902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.368 [2024-10-01 13:43:58.654925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.368 [2024-10-01 13:43:58.654942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.368 [2024-10-01 13:43:58.654959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.368 [2024-10-01 13:43:58.654975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.368 [2024-10-01 13:43:58.654989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.368 [2024-10-01 13:43:58.655259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.368 [2024-10-01 13:43:58.655297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.368 8416.67 IOPS, 32.88 MiB/s [2024-10-01 13:43:58.667431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.368 [2024-10-01 13:43:58.667502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.368 [2024-10-01 13:43:58.668651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.368 [2024-10-01 13:43:58.668701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.368 [2024-10-01 13:43:58.668725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.368 [2024-10-01 13:43:58.668791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.368 [2024-10-01 13:43:58.668817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.368 [2024-10-01 13:43:58.668834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.368 [2024-10-01 13:43:58.669748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.368 [2024-10-01 13:43:58.669798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.368 [2024-10-01 13:43:58.669985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.368 [2024-10-01 13:43:58.670021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.368 [2024-10-01 13:43:58.670040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.368 [2024-10-01 13:43:58.670059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.368 [2024-10-01 13:43:58.670076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.368 [2024-10-01 13:43:58.670089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.368 [2024-10-01 13:43:58.670204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.368 [2024-10-01 13:43:58.670227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.369 [2024-10-01 13:43:58.679284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.369 [2024-10-01 13:43:58.679375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.369 [2024-10-01 13:43:58.679509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.369 [2024-10-01 13:43:58.679566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.369 [2024-10-01 13:43:58.679587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.369 [2024-10-01 13:43:58.679641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.369 [2024-10-01 13:43:58.679667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.369 [2024-10-01 13:43:58.679684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.369 [2024-10-01 13:43:58.679720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.369 [2024-10-01 13:43:58.679745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.369 [2024-10-01 13:43:58.679772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.369 [2024-10-01 13:43:58.679791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.369 [2024-10-01 13:43:58.679806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.369 [2024-10-01 13:43:58.679824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.369 [2024-10-01 13:43:58.679840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.369 [2024-10-01 13:43:58.679854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.369 [2024-10-01 13:43:58.679900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.369 [2024-10-01 13:43:58.679923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.369 [2024-10-01 13:43:58.690475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.369 [2024-10-01 13:43:58.690586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.369 [2024-10-01 13:43:58.690909] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.369 [2024-10-01 13:43:58.690957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.369 [2024-10-01 13:43:58.690979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.369 [2024-10-01 13:43:58.691034] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.369 [2024-10-01 13:43:58.691060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.369 [2024-10-01 13:43:58.691077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.369 [2024-10-01 13:43:58.691139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.369 [2024-10-01 13:43:58.691170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.369 [2024-10-01 13:43:58.691199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.369 [2024-10-01 13:43:58.691217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.369 [2024-10-01 13:43:58.691233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.369 [2024-10-01 13:43:58.691251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.369 [2024-10-01 13:43:58.691267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.369 [2024-10-01 13:43:58.691281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.369 [2024-10-01 13:43:58.691314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.369 [2024-10-01 13:43:58.691335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.369 [2024-10-01 13:43:58.701099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.369 [2024-10-01 13:43:58.701186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.369 [2024-10-01 13:43:58.701321] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.369 [2024-10-01 13:43:58.701357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.369 [2024-10-01 13:43:58.701376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.369 [2024-10-01 13:43:58.701427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.369 [2024-10-01 13:43:58.701453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.369 [2024-10-01 13:43:58.701470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.369 [2024-10-01 13:43:58.702420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.369 [2024-10-01 13:43:58.702465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.369 [2024-10-01 13:43:58.702679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.369 [2024-10-01 13:43:58.702708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.369 [2024-10-01 13:43:58.702725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.369 [2024-10-01 13:43:58.702744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.369 [2024-10-01 13:43:58.702785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.369 [2024-10-01 13:43:58.702800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.369 [2024-10-01 13:43:58.702922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.369 [2024-10-01 13:43:58.702945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.369 [2024-10-01 13:43:58.712244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.369 [2024-10-01 13:43:58.712338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.369 [2024-10-01 13:43:58.712473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.369 [2024-10-01 13:43:58.712509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.369 [2024-10-01 13:43:58.712529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.369 [2024-10-01 13:43:58.712602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.369 [2024-10-01 13:43:58.712629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.369 [2024-10-01 13:43:58.712646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.369 [2024-10-01 13:43:58.712683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.369 [2024-10-01 13:43:58.712708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.369 [2024-10-01 13:43:58.712979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.369 [2024-10-01 13:43:58.713007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.369 [2024-10-01 13:43:58.713023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.369 [2024-10-01 13:43:58.713041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.369 [2024-10-01 13:43:58.713057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.369 [2024-10-01 13:43:58.713071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.369 [2024-10-01 13:43:58.713222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.369 [2024-10-01 13:43:58.713248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.369 [2024-10-01 13:43:58.722630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.369 [2024-10-01 13:43:58.722726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.370 [2024-10-01 13:43:58.722858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.370 [2024-10-01 13:43:58.722893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.370 [2024-10-01 13:43:58.722912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.370 [2024-10-01 13:43:58.722963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.370 [2024-10-01 13:43:58.722988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.370 [2024-10-01 13:43:58.723004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.370 [2024-10-01 13:43:58.723073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.370 [2024-10-01 13:43:58.723100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.370 [2024-10-01 13:43:58.723128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.370 [2024-10-01 13:43:58.723146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.370 [2024-10-01 13:43:58.723162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.370 [2024-10-01 13:43:58.723179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.370 [2024-10-01 13:43:58.723195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.370 [2024-10-01 13:43:58.723209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.370 [2024-10-01 13:43:58.723241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.370 [2024-10-01 13:43:58.723261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.370 [2024-10-01 13:43:58.734111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.370 [2024-10-01 13:43:58.734247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.370 [2024-10-01 13:43:58.734462] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.370 [2024-10-01 13:43:58.734520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.370 [2024-10-01 13:43:58.734577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.370 [2024-10-01 13:43:58.734658] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.370 [2024-10-01 13:43:58.734687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.370 [2024-10-01 13:43:58.734704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.370 [2024-10-01 13:43:58.734746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.370 [2024-10-01 13:43:58.734771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.370 [2024-10-01 13:43:58.734798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.370 [2024-10-01 13:43:58.734817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.370 [2024-10-01 13:43:58.734833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.370 [2024-10-01 13:43:58.734851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.370 [2024-10-01 13:43:58.734867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.370 [2024-10-01 13:43:58.734881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.370 [2024-10-01 13:43:58.734914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.370 [2024-10-01 13:43:58.734935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.370 [2024-10-01 13:43:58.744794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.370 [2024-10-01 13:43:58.744882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.370 [2024-10-01 13:43:58.745019] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.370 [2024-10-01 13:43:58.745084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.370 [2024-10-01 13:43:58.745106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.370 [2024-10-01 13:43:58.745161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.370 [2024-10-01 13:43:58.745187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.370 [2024-10-01 13:43:58.745203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.370 [2024-10-01 13:43:58.746163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.370 [2024-10-01 13:43:58.746211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.370 [2024-10-01 13:43:58.746416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.370 [2024-10-01 13:43:58.746454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.370 [2024-10-01 13:43:58.746473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.370 [2024-10-01 13:43:58.746492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.370 [2024-10-01 13:43:58.746508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.370 [2024-10-01 13:43:58.746521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.370 [2024-10-01 13:43:58.746653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.370 [2024-10-01 13:43:58.746678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.370 [2024-10-01 13:43:58.755840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.370 [2024-10-01 13:43:58.755928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.370 [2024-10-01 13:43:58.756060] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.370 [2024-10-01 13:43:58.756095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.370 [2024-10-01 13:43:58.756113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.370 [2024-10-01 13:43:58.756166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.370 [2024-10-01 13:43:58.756191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.370 [2024-10-01 13:43:58.756208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.370 [2024-10-01 13:43:58.756244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.370 [2024-10-01 13:43:58.756268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.370 [2024-10-01 13:43:58.756295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.370 [2024-10-01 13:43:58.756313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.370 [2024-10-01 13:43:58.756329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.371 [2024-10-01 13:43:58.756347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.371 [2024-10-01 13:43:58.756362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.371 [2024-10-01 13:43:58.756401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.371 [2024-10-01 13:43:58.756697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.371 [2024-10-01 13:43:58.756726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.371 [2024-10-01 13:43:58.766368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.371 [2024-10-01 13:43:58.766460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.371 [2024-10-01 13:43:58.766625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.371 [2024-10-01 13:43:58.766661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.371 [2024-10-01 13:43:58.766680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.371 [2024-10-01 13:43:58.766733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.371 [2024-10-01 13:43:58.766758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.371 [2024-10-01 13:43:58.766775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.371 [2024-10-01 13:43:58.766812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.371 [2024-10-01 13:43:58.766837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.371 [2024-10-01 13:43:58.766864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.371 [2024-10-01 13:43:58.766884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.371 [2024-10-01 13:43:58.766900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.371 [2024-10-01 13:43:58.766918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.371 [2024-10-01 13:43:58.766934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.371 [2024-10-01 13:43:58.766947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.371 [2024-10-01 13:43:58.766980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.371 [2024-10-01 13:43:58.767000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.371 [2024-10-01 13:43:58.777676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.371 [2024-10-01 13:43:58.777739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.371 [2024-10-01 13:43:58.777858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.371 [2024-10-01 13:43:58.777892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.371 [2024-10-01 13:43:58.777911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.371 [2024-10-01 13:43:58.777961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.371 [2024-10-01 13:43:58.777986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.371 [2024-10-01 13:43:58.778002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.371 [2024-10-01 13:43:58.778053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.371 [2024-10-01 13:43:58.778109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.371 [2024-10-01 13:43:58.778142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.371 [2024-10-01 13:43:58.778161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.371 [2024-10-01 13:43:58.778176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.371 [2024-10-01 13:43:58.778194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.371 [2024-10-01 13:43:58.778210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.371 [2024-10-01 13:43:58.778224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.371 [2024-10-01 13:43:58.778257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.371 [2024-10-01 13:43:58.778278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.371 [2024-10-01 13:43:58.788010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.371 [2024-10-01 13:43:58.788061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.371 [2024-10-01 13:43:58.788161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.371 [2024-10-01 13:43:58.788193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.371 [2024-10-01 13:43:58.788211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.371 [2024-10-01 13:43:58.788262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.371 [2024-10-01 13:43:58.788287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.371 [2024-10-01 13:43:58.788303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.371 [2024-10-01 13:43:58.789235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.371 [2024-10-01 13:43:58.789285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.371 [2024-10-01 13:43:58.789486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.371 [2024-10-01 13:43:58.789526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.371 [2024-10-01 13:43:58.789559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.371 [2024-10-01 13:43:58.789579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.371 [2024-10-01 13:43:58.789595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.371 [2024-10-01 13:43:58.789609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.371 [2024-10-01 13:43:58.790877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.371 [2024-10-01 13:43:58.790915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.371 [2024-10-01 13:43:58.799037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.372 [2024-10-01 13:43:58.799114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.372 [2024-10-01 13:43:58.799241] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.372 [2024-10-01 13:43:58.799275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.372 [2024-10-01 13:43:58.799327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.372 [2024-10-01 13:43:58.799382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.372 [2024-10-01 13:43:58.799408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.372 [2024-10-01 13:43:58.799425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.372 [2024-10-01 13:43:58.799462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.372 [2024-10-01 13:43:58.799486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.372 [2024-10-01 13:43:58.799513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.372 [2024-10-01 13:43:58.799531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.372 [2024-10-01 13:43:58.799565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.372 [2024-10-01 13:43:58.799582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.372 [2024-10-01 13:43:58.799598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.372 [2024-10-01 13:43:58.799611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.372 [2024-10-01 13:43:58.799902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.372 [2024-10-01 13:43:58.799931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.372 [2024-10-01 13:43:58.809398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.372 [2024-10-01 13:43:58.809489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.372 [2024-10-01 13:43:58.809651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.372 [2024-10-01 13:43:58.809686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.372 [2024-10-01 13:43:58.809706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.372 [2024-10-01 13:43:58.809759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.372 [2024-10-01 13:43:58.809784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.372 [2024-10-01 13:43:58.809801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.372 [2024-10-01 13:43:58.809837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.372 [2024-10-01 13:43:58.809861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.372 [2024-10-01 13:43:58.809889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.372 [2024-10-01 13:43:58.809908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.372 [2024-10-01 13:43:58.809924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.372 [2024-10-01 13:43:58.809941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.372 [2024-10-01 13:43:58.809956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.372 [2024-10-01 13:43:58.809970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.372 [2024-10-01 13:43:58.810029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.372 [2024-10-01 13:43:58.810052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.372 [2024-10-01 13:43:58.820813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.372 [2024-10-01 13:43:58.820865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.372 [2024-10-01 13:43:58.821128] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.372 [2024-10-01 13:43:58.821173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.372 [2024-10-01 13:43:58.821194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.372 [2024-10-01 13:43:58.821247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.372 [2024-10-01 13:43:58.821273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.372 [2024-10-01 13:43:58.821289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.372 [2024-10-01 13:43:58.821332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.372 [2024-10-01 13:43:58.821357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.372 [2024-10-01 13:43:58.821385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.372 [2024-10-01 13:43:58.821403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.372 [2024-10-01 13:43:58.821417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.372 [2024-10-01 13:43:58.821435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.372 [2024-10-01 13:43:58.821450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.372 [2024-10-01 13:43:58.821464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.372 [2024-10-01 13:43:58.821500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.372 [2024-10-01 13:43:58.821520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.372 [2024-10-01 13:43:58.830951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.372 [2024-10-01 13:43:58.831002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.372 [2024-10-01 13:43:58.831100] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.372 [2024-10-01 13:43:58.831132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.373 [2024-10-01 13:43:58.831150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.373 [2024-10-01 13:43:58.831200] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.373 [2024-10-01 13:43:58.831225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.373 [2024-10-01 13:43:58.831242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.373 [2024-10-01 13:43:58.832507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.373 [2024-10-01 13:43:58.832562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.373 [2024-10-01 13:43:58.832808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.373 [2024-10-01 13:43:58.832847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.373 [2024-10-01 13:43:58.832866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.373 [2024-10-01 13:43:58.832884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.373 [2024-10-01 13:43:58.832900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.373 [2024-10-01 13:43:58.832913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.373 [2024-10-01 13:43:58.833849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.373 [2024-10-01 13:43:58.833888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.373 [2024-10-01 13:43:58.841893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.373 [2024-10-01 13:43:58.841947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.373 [2024-10-01 13:43:58.842217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.373 [2024-10-01 13:43:58.842261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.373 [2024-10-01 13:43:58.842282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.373 [2024-10-01 13:43:58.842334] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.373 [2024-10-01 13:43:58.842360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.373 [2024-10-01 13:43:58.842377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.373 [2024-10-01 13:43:58.843389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.373 [2024-10-01 13:43:58.843434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.373 [2024-10-01 13:43:58.844094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.373 [2024-10-01 13:43:58.844141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.373 [2024-10-01 13:43:58.844159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.373 [2024-10-01 13:43:58.844177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.373 [2024-10-01 13:43:58.844193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.373 [2024-10-01 13:43:58.844207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.373 [2024-10-01 13:43:58.844528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.373 [2024-10-01 13:43:58.844579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.373 [2024-10-01 13:43:58.854014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.373 [2024-10-01 13:43:58.854064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.373 [2024-10-01 13:43:58.854168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.373 [2024-10-01 13:43:58.854200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.373 [2024-10-01 13:43:58.854219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.373 [2024-10-01 13:43:58.854291] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.373 [2024-10-01 13:43:58.854318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.373 [2024-10-01 13:43:58.854335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.373 [2024-10-01 13:43:58.854381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.373 [2024-10-01 13:43:58.854406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.373 [2024-10-01 13:43:58.854433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.373 [2024-10-01 13:43:58.854451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.373 [2024-10-01 13:43:58.854466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.373 [2024-10-01 13:43:58.854483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.373 [2024-10-01 13:43:58.854498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.373 [2024-10-01 13:43:58.854512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.373 [2024-10-01 13:43:58.854561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.373 [2024-10-01 13:43:58.854584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.373 [2024-10-01 13:43:58.865232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.373 [2024-10-01 13:43:58.865284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.373 [2024-10-01 13:43:58.865390] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.373 [2024-10-01 13:43:58.865423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.373 [2024-10-01 13:43:58.865441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.373 [2024-10-01 13:43:58.865492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.374 [2024-10-01 13:43:58.865517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.374 [2024-10-01 13:43:58.865549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.374 [2024-10-01 13:43:58.865589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.374 [2024-10-01 13:43:58.865613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.374 [2024-10-01 13:43:58.865658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.374 [2024-10-01 13:43:58.865681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.374 [2024-10-01 13:43:58.865695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.374 [2024-10-01 13:43:58.865713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.374 [2024-10-01 13:43:58.865729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.374 [2024-10-01 13:43:58.865743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.374 [2024-10-01 13:43:58.865775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.374 [2024-10-01 13:43:58.865808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.374 [2024-10-01 13:43:58.875464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.374 [2024-10-01 13:43:58.875516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.374 [2024-10-01 13:43:58.875629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.374 [2024-10-01 13:43:58.875661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.374 [2024-10-01 13:43:58.875679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.374 [2024-10-01 13:43:58.875729] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.374 [2024-10-01 13:43:58.875754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.374 [2024-10-01 13:43:58.875770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.374 [2024-10-01 13:43:58.876709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.374 [2024-10-01 13:43:58.876754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.374 [2024-10-01 13:43:58.876942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.374 [2024-10-01 13:43:58.876970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.374 [2024-10-01 13:43:58.876986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.374 [2024-10-01 13:43:58.877004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.374 [2024-10-01 13:43:58.877019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.374 [2024-10-01 13:43:58.877032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.374 [2024-10-01 13:43:58.878301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.374 [2024-10-01 13:43:58.878340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.374 [2024-10-01 13:43:58.886318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.374 [2024-10-01 13:43:58.886368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.374 [2024-10-01 13:43:58.886467] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.374 [2024-10-01 13:43:58.886499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.375 [2024-10-01 13:43:58.886517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.886586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.886614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.375 [2024-10-01 13:43:58.886630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.886665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.886689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.886722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.886740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.886772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.886791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.886808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.886821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.887085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.375 [2024-10-01 13:43:58.887113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.375 [2024-10-01 13:43:58.896517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.375 [2024-10-01 13:43:58.896582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.375 [2024-10-01 13:43:58.896680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.896712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.375 [2024-10-01 13:43:58.896730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.896786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.896812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.375 [2024-10-01 13:43:58.896828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.896861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.896885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.896912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.896931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.896945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.896962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.896977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.896991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.897022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.375 [2024-10-01 13:43:58.897042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.375 [2024-10-01 13:43:58.907695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.375 [2024-10-01 13:43:58.907745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.375 [2024-10-01 13:43:58.907844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.907888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.375 [2024-10-01 13:43:58.907909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.907960] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.908002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.375 [2024-10-01 13:43:58.908022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.908057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.908081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.908108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.908126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.908140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.908157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.908177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.908190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.908222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.375 [2024-10-01 13:43:58.908242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.375 [2024-10-01 13:43:58.919681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.375 [2024-10-01 13:43:58.919783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.375 [2024-10-01 13:43:58.921955] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.922031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.375 [2024-10-01 13:43:58.922075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.922177] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.922223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.375 [2024-10-01 13:43:58.922258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.923473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.923570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.925417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.925467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.925489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.925515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.925531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.925572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.926435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.375 [2024-10-01 13:43:58.926477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.375 [2024-10-01 13:43:58.929892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.375 [2024-10-01 13:43:58.929973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.375 [2024-10-01 13:43:58.930064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.930095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.375 [2024-10-01 13:43:58.930113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.930182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.930210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.375 [2024-10-01 13:43:58.930227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.930246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.930279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.930301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.930315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.930330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.930362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.375 [2024-10-01 13:43:58.930382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.930396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.930410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.930440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.375 [2024-10-01 13:43:58.940010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.375 [2024-10-01 13:43:58.940146] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.940181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.375 [2024-10-01 13:43:58.940199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.940235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.375 [2024-10-01 13:43:58.940270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.940342] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.940371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.375 [2024-10-01 13:43:58.940388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.940404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.940418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.940433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.941252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.375 [2024-10-01 13:43:58.941313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.941512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.941560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.941578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.942580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.375 [2024-10-01 13:43:58.950174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.375 [2024-10-01 13:43:58.950294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.950327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.375 [2024-10-01 13:43:58.950346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.950379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.950427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.950449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.950463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.950497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.375 [2024-10-01 13:43:58.950522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.375 [2024-10-01 13:43:58.950620] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.950650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.375 [2024-10-01 13:43:58.950668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.950701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.950733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.950751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.950765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.950797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.375 [2024-10-01 13:43:58.960383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.375 [2024-10-01 13:43:58.960504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.375 [2024-10-01 13:43:58.960551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.375 [2024-10-01 13:43:58.960573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.375 [2024-10-01 13:43:58.960608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.375 [2024-10-01 13:43:58.961531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.375 [2024-10-01 13:43:58.961583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.375 [2024-10-01 13:43:58.961621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.375 [2024-10-01 13:43:58.961841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:58.961875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.376 [2024-10-01 13:43:58.962002] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:58.962034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.376 [2024-10-01 13:43:58.962053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:58.963285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:58.964194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:58.964235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.376 [2024-10-01 13:43:58.964253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.376 [2024-10-01 13:43:58.964479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.376 [2024-10-01 13:43:58.971234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:58.971354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:58.971386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.376 [2024-10-01 13:43:58.971404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:58.971437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:58.971469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:58.971486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.376 [2024-10-01 13:43:58.971501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.376 [2024-10-01 13:43:58.971532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.376 [2024-10-01 13:43:58.972192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:58.972311] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:58.972345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.376 [2024-10-01 13:43:58.972364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:58.972397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:58.972434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:58.972451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.376 [2024-10-01 13:43:58.972466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.376 [2024-10-01 13:43:58.972498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.376 [2024-10-01 13:43:58.981378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:58.981495] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:58.981558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.376 [2024-10-01 13:43:58.981582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:58.981617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:58.981650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:58.981667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.376 [2024-10-01 13:43:58.981681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.376 [2024-10-01 13:43:58.981713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.376 [2024-10-01 13:43:58.982282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:58.982385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:58.982417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.376 [2024-10-01 13:43:58.982434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:58.982467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:58.982500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:58.982518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.376 [2024-10-01 13:43:58.982532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.376 [2024-10-01 13:43:58.982581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.376 [2024-10-01 13:43:58.992611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:58.992689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:58.992773] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:58.992803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.376 [2024-10-01 13:43:58.992821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:58.992890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:58.992918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.376 [2024-10-01 13:43:58.992935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:58.992954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:58.992987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:58.993008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:58.993022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.376 [2024-10-01 13:43:58.993037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.376 [2024-10-01 13:43:58.993069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.376 [2024-10-01 13:43:58.993103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:58.993120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.376 [2024-10-01 13:43:58.993135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.376 [2024-10-01 13:43:58.993166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.376 [2024-10-01 13:43:59.003032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:59.003090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:59.003193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:59.003225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.376 [2024-10-01 13:43:59.003243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:59.003294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:59.003319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.376 [2024-10-01 13:43:59.003336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:59.003369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:59.003393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:59.003420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:59.003439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.376 [2024-10-01 13:43:59.003454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.376 [2024-10-01 13:43:59.003471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:59.003487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.376 [2024-10-01 13:43:59.003501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.376 [2024-10-01 13:43:59.004442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.376 [2024-10-01 13:43:59.004483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.376 [2024-10-01 13:43:59.013251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:59.013306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:59.013426] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:59.013459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.376 [2024-10-01 13:43:59.013478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:59.013529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:59.013572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.376 [2024-10-01 13:43:59.013590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:59.014531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:59.014591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:59.015198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:59.015237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.376 [2024-10-01 13:43:59.015256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.376 [2024-10-01 13:43:59.015274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:59.015290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.376 [2024-10-01 13:43:59.015304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.376 [2024-10-01 13:43:59.015393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.376 [2024-10-01 13:43:59.015419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.376 [2024-10-01 13:43:59.023390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:59.023465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:59.023566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:59.023599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.376 [2024-10-01 13:43:59.023617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:59.024883] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:59.024928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.376 [2024-10-01 13:43:59.024949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:59.024969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:59.025825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:59.025867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:59.025886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.376 [2024-10-01 13:43:59.025900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.376 [2024-10-01 13:43:59.026014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.376 [2024-10-01 13:43:59.026041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:59.026057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.376 [2024-10-01 13:43:59.026072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.376 [2024-10-01 13:43:59.026105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.376 [2024-10-01 13:43:59.033489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:59.034574] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.376 [2024-10-01 13:43:59.034622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.376 [2024-10-01 13:43:59.034668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.376 [2024-10-01 13:43:59.034893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.376 [2024-10-01 13:43:59.036238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.376 [2024-10-01 13:43:59.036293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.376 [2024-10-01 13:43:59.036315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.036330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.037206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.377 [2024-10-01 13:43:59.037293] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.377 [2024-10-01 13:43:59.037323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.377 [2024-10-01 13:43:59.037341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.377 [2024-10-01 13:43:59.037576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.377 [2024-10-01 13:43:59.037626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.377 [2024-10-01 13:43:59.037646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.037660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.037693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.377 [2024-10-01 13:43:59.044438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.377 [2024-10-01 13:43:59.044573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.377 [2024-10-01 13:43:59.044607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.377 [2024-10-01 13:43:59.044625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.377 [2024-10-01 13:43:59.044661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.377 [2024-10-01 13:43:59.044694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.377 [2024-10-01 13:43:59.044712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.044726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.044758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.377 [2024-10-01 13:43:59.046333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.377 [2024-10-01 13:43:59.046445] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.377 [2024-10-01 13:43:59.046477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.377 [2024-10-01 13:43:59.046495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.377 [2024-10-01 13:43:59.046528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.377 [2024-10-01 13:43:59.046578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.377 [2024-10-01 13:43:59.046613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.046628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.046661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.377 [2024-10-01 13:43:59.055193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.377 [2024-10-01 13:43:59.055314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.377 [2024-10-01 13:43:59.055348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.377 [2024-10-01 13:43:59.055367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.377 [2024-10-01 13:43:59.055400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.377 [2024-10-01 13:43:59.055432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.377 [2024-10-01 13:43:59.055450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.055464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.055496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.377 [2024-10-01 13:43:59.056420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.377 [2024-10-01 13:43:59.056515] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.377 [2024-10-01 13:43:59.056559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.377 [2024-10-01 13:43:59.056580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.377 [2024-10-01 13:43:59.056614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.377 [2024-10-01 13:43:59.057413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.377 [2024-10-01 13:43:59.057453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.057471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.057676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.377 [2024-10-01 13:43:59.066462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.377 [2024-10-01 13:43:59.066618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.377 [2024-10-01 13:43:59.066663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.377 [2024-10-01 13:43:59.066685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.377 [2024-10-01 13:43:59.066722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.377 [2024-10-01 13:43:59.066758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.377 [2024-10-01 13:43:59.066832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.377 [2024-10-01 13:43:59.066861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.377 [2024-10-01 13:43:59.066878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.377 [2024-10-01 13:43:59.066913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.377 [2024-10-01 13:43:59.066930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.066944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.066979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.377 [2024-10-01 13:43:59.067002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.377 [2024-10-01 13:43:59.067032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.377 [2024-10-01 13:43:59.067050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.067064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.067094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.377 [2024-10-01 13:43:59.076701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.377 [2024-10-01 13:43:59.076819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.377 [2024-10-01 13:43:59.076858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.377 [2024-10-01 13:43:59.076877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.377 [2024-10-01 13:43:59.076941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.377 [2024-10-01 13:43:59.077883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.377 [2024-10-01 13:43:59.077936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.377 [2024-10-01 13:43:59.077957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.077971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.078172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.377 [2024-10-01 13:43:59.078254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.377 [2024-10-01 13:43:59.078284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.377 [2024-10-01 13:43:59.078302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.377 [2024-10-01 13:43:59.079592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.377 [2024-10-01 13:43:59.080520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.377 [2024-10-01 13:43:59.080576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.080596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.080771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.377 [2024-10-01 13:43:59.087635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.377 [2024-10-01 13:43:59.087752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.377 [2024-10-01 13:43:59.087786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.377 [2024-10-01 13:43:59.087805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.377 [2024-10-01 13:43:59.087856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.377 [2024-10-01 13:43:59.087903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.377 [2024-10-01 13:43:59.087923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.087937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.087969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.377 [2024-10-01 13:43:59.088027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.377 [2024-10-01 13:43:59.088118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.377 [2024-10-01 13:43:59.088147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.377 [2024-10-01 13:43:59.088165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.377 [2024-10-01 13:43:59.088432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.377 [2024-10-01 13:43:59.088614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.377 [2024-10-01 13:43:59.088650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.088667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.088779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.377 [2024-10-01 13:43:59.097823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.377 [2024-10-01 13:43:59.097959] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.377 [2024-10-01 13:43:59.098008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.377 [2024-10-01 13:43:59.098029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.377 [2024-10-01 13:43:59.098063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.377 [2024-10-01 13:43:59.098099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.377 [2024-10-01 13:43:59.098119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.098134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.098176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.377 [2024-10-01 13:43:59.098214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.377 [2024-10-01 13:43:59.098298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.377 [2024-10-01 13:43:59.098327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.377 [2024-10-01 13:43:59.098344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.377 [2024-10-01 13:43:59.098376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.377 [2024-10-01 13:43:59.098408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.377 [2024-10-01 13:43:59.098426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.377 [2024-10-01 13:43:59.098456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.377 [2024-10-01 13:43:59.098490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.377 [2024-10-01 13:43:59.109331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.378 [2024-10-01 13:43:59.109384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.378 [2024-10-01 13:43:59.109494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.378 [2024-10-01 13:43:59.109526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.378 [2024-10-01 13:43:59.109561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.378 [2024-10-01 13:43:59.109615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.378 [2024-10-01 13:43:59.109641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.378 [2024-10-01 13:43:59.109658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.378 [2024-10-01 13:43:59.109692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.378 [2024-10-01 13:43:59.109716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.378 [2024-10-01 13:43:59.109743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.378 [2024-10-01 13:43:59.109761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.378 [2024-10-01 13:43:59.109776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.378 [2024-10-01 13:43:59.109793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.378 [2024-10-01 13:43:59.109809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.378 [2024-10-01 13:43:59.109823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.378 [2024-10-01 13:43:59.109854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.378 [2024-10-01 13:43:59.109874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.378 [2024-10-01 13:43:59.119704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.378 [2024-10-01 13:43:59.119755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.378 [2024-10-01 13:43:59.119854] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.378 [2024-10-01 13:43:59.119897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.378 [2024-10-01 13:43:59.119917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.378 [2024-10-01 13:43:59.119968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.378 [2024-10-01 13:43:59.119993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.378 [2024-10-01 13:43:59.120010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.378 [2024-10-01 13:43:59.120957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.378 [2024-10-01 13:43:59.121004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.378 [2024-10-01 13:43:59.121229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.378 [2024-10-01 13:43:59.121267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.378 [2024-10-01 13:43:59.121285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.378 [2024-10-01 13:43:59.121303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.378 [2024-10-01 13:43:59.121319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.378 [2024-10-01 13:43:59.121333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.378 [2024-10-01 13:43:59.121411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.378 [2024-10-01 13:43:59.121434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.378 [2024-10-01 13:43:59.130743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.378 [2024-10-01 13:43:59.130796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.378 [2024-10-01 13:43:59.130899] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.378 [2024-10-01 13:43:59.130931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.378 [2024-10-01 13:43:59.130950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.378 [2024-10-01 13:43:59.131000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.378 [2024-10-01 13:43:59.131026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.378 [2024-10-01 13:43:59.131043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.378 [2024-10-01 13:43:59.131076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.378 [2024-10-01 13:43:59.131099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.378 [2024-10-01 13:43:59.131126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.378 [2024-10-01 13:43:59.131144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.378 [2024-10-01 13:43:59.131159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.378 [2024-10-01 13:43:59.131175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.378 [2024-10-01 13:43:59.131191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.378 [2024-10-01 13:43:59.131205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.378 [2024-10-01 13:43:59.131466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.378 [2024-10-01 13:43:59.131493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.378 [2024-10-01 13:43:59.140895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.378 [2024-10-01 13:43:59.140971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.378 [2024-10-01 13:43:59.141054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.378 [2024-10-01 13:43:59.141084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.378 [2024-10-01 13:43:59.141102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.378 [2024-10-01 13:43:59.141190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.378 [2024-10-01 13:43:59.141219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.378 [2024-10-01 13:43:59.141236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.378 [2024-10-01 13:43:59.141256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.378 [2024-10-01 13:43:59.141289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.378 [2024-10-01 13:43:59.141310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.378 [2024-10-01 13:43:59.141325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.378 [2024-10-01 13:43:59.141340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.378 [2024-10-01 13:43:59.141372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.378 [2024-10-01 13:43:59.141393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.378 [2024-10-01 13:43:59.141407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.378 [2024-10-01 13:43:59.141421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.378 [2024-10-01 13:43:59.141451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.378 [2024-10-01 13:43:59.152298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.378 [2024-10-01 13:43:59.152380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.378 [2024-10-01 13:43:59.152516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.378 [2024-10-01 13:43:59.152567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.378 [2024-10-01 13:43:59.152588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.378 [2024-10-01 13:43:59.152643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.378 [2024-10-01 13:43:59.152669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.378 [2024-10-01 13:43:59.152685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.378 [2024-10-01 13:43:59.152722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.378 [2024-10-01 13:43:59.152747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.378 [2024-10-01 13:43:59.152774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.378 [2024-10-01 13:43:59.152793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.378 [2024-10-01 13:43:59.152809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.378 [2024-10-01 13:43:59.152826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.378 [2024-10-01 13:43:59.152841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.378 [2024-10-01 13:43:59.152855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.378 [2024-10-01 13:43:59.152887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.378 [2024-10-01 13:43:59.152927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.378 [2024-10-01 13:43:59.162651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.379 [2024-10-01 13:43:59.162705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.379 [2024-10-01 13:43:59.162807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.379 [2024-10-01 13:43:59.162840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.379 [2024-10-01 13:43:59.162859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.379 [2024-10-01 13:43:59.162911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.379 [2024-10-01 13:43:59.162936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.379 [2024-10-01 13:43:59.162954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.379 [2024-10-01 13:43:59.163896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.379 [2024-10-01 13:43:59.163942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.379 [2024-10-01 13:43:59.164158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.379 [2024-10-01 13:43:59.164195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.379 [2024-10-01 13:43:59.164213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.379 [2024-10-01 13:43:59.164231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.379 [2024-10-01 13:43:59.164247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.379 [2024-10-01 13:43:59.164261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.379 [2024-10-01 13:43:59.165547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.379 [2024-10-01 13:43:59.165585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.379 [2024-10-01 13:43:59.173664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.379 [2024-10-01 13:43:59.173716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.379 [2024-10-01 13:43:59.173816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.379 [2024-10-01 13:43:59.173849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.379 [2024-10-01 13:43:59.173867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.379 [2024-10-01 13:43:59.173918] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.379 [2024-10-01 13:43:59.173943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.379 [2024-10-01 13:43:59.173960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.379 [2024-10-01 13:43:59.173993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.379 [2024-10-01 13:43:59.174016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.379 [2024-10-01 13:43:59.174043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.379 [2024-10-01 13:43:59.174083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.379 [2024-10-01 13:43:59.174100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.379 [2024-10-01 13:43:59.174117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.379 [2024-10-01 13:43:59.174133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.379 [2024-10-01 13:43:59.174147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.379 [2024-10-01 13:43:59.174411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.379 [2024-10-01 13:43:59.174438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.379 [2024-10-01 13:43:59.183824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.379 [2024-10-01 13:43:59.183883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.379 [2024-10-01 13:43:59.183984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.379 [2024-10-01 13:43:59.184016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.379 [2024-10-01 13:43:59.184035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.379 [2024-10-01 13:43:59.184086] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.379 [2024-10-01 13:43:59.184111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.379 [2024-10-01 13:43:59.184127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.379 [2024-10-01 13:43:59.184161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.379 [2024-10-01 13:43:59.184185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.379 [2024-10-01 13:43:59.184212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.379 [2024-10-01 13:43:59.184230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.379 [2024-10-01 13:43:59.184245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.379 [2024-10-01 13:43:59.184261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.379 [2024-10-01 13:43:59.184277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.379 [2024-10-01 13:43:59.184291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.379 [2024-10-01 13:43:59.184322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.379 [2024-10-01 13:43:59.184342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.379 [2024-10-01 13:43:59.194993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.379 [2024-10-01 13:43:59.195048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.379 [2024-10-01 13:43:59.195150] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.379 [2024-10-01 13:43:59.195183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.379 [2024-10-01 13:43:59.195202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.379 [2024-10-01 13:43:59.195253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.379 [2024-10-01 13:43:59.195294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.379 [2024-10-01 13:43:59.195314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.379 [2024-10-01 13:43:59.195348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.379 [2024-10-01 13:43:59.195373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.379 [2024-10-01 13:43:59.195399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.379 [2024-10-01 13:43:59.195417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.379 [2024-10-01 13:43:59.195432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.379 [2024-10-01 13:43:59.195449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.379 [2024-10-01 13:43:59.195464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.379 [2024-10-01 13:43:59.195478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.379 [2024-10-01 13:43:59.195510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.379 [2024-10-01 13:43:59.195529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.379 [2024-10-01 13:43:59.206615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.379 [2024-10-01 13:43:59.206685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.379 [2024-10-01 13:43:59.208363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.379 [2024-10-01 13:43:59.208417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.379 [2024-10-01 13:43:59.208451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.379 [2024-10-01 13:43:59.208531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.379 [2024-10-01 13:43:59.208579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.379 [2024-10-01 13:43:59.208597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.379 [2024-10-01 13:43:59.209499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.379 [2024-10-01 13:43:59.209568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.379 [2024-10-01 13:43:59.209799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.379 [2024-10-01 13:43:59.209840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.379 [2024-10-01 13:43:59.209861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.379 [2024-10-01 13:43:59.209880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.379 [2024-10-01 13:43:59.209896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.379 [2024-10-01 13:43:59.209910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.379 [2024-10-01 13:43:59.211245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.379 [2024-10-01 13:43:59.211295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.379 [2024-10-01 13:43:59.216841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.379 [2024-10-01 13:43:59.216995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.379 [2024-10-01 13:43:59.217167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.379 [2024-10-01 13:43:59.217220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.379 [2024-10-01 13:43:59.217256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.379 [2024-10-01 13:43:59.217388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.379 [2024-10-01 13:43:59.217434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.379 [2024-10-01 13:43:59.217468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.379 [2024-10-01 13:43:59.217507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.379 [2024-10-01 13:43:59.217905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.379 [2024-10-01 13:43:59.217970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.379 [2024-10-01 13:43:59.218001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.379 [2024-10-01 13:43:59.218027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.379 [2024-10-01 13:43:59.218236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.379 [2024-10-01 13:43:59.218293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.379 [2024-10-01 13:43:59.218322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.379 [2024-10-01 13:43:59.218350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.379 [2024-10-01 13:43:59.218521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.379 [2024-10-01 13:43:59.227019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.379 [2024-10-01 13:43:59.227224] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.379 [2024-10-01 13:43:59.227262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.379 [2024-10-01 13:43:59.227282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.379 [2024-10-01 13:43:59.227333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.379 [2024-10-01 13:43:59.227376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.227411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.227429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.227445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.228710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.380 [2024-10-01 13:43:59.228806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.228838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.380 [2024-10-01 13:43:59.228857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.229796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.229937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.229965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.229981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.230017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.380 [2024-10-01 13:43:59.237171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.237304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.237337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.380 [2024-10-01 13:43:59.237356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.237391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.237423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.237440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.237454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.237500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.380 [2024-10-01 13:43:59.237554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.237647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.237678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.380 [2024-10-01 13:43:59.237696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.237729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.237763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.237781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.237795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.239058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.380 [2024-10-01 13:43:59.248067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.248122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.248228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.248262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.380 [2024-10-01 13:43:59.248281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.248337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.248363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.380 [2024-10-01 13:43:59.248398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.248435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.248459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.248486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.248505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.248519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.248554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.248574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.248589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.248622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.380 [2024-10-01 13:43:59.248642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.380 [2024-10-01 13:43:59.258211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.258267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.258375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.258408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.380 [2024-10-01 13:43:59.258427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.258478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.258503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.380 [2024-10-01 13:43:59.258520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.259463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.259513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.259749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.259780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.259797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.259816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.259831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.259845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.259954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.380 [2024-10-01 13:43:59.259981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.380 [2024-10-01 13:43:59.268348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.268445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.268551] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.268584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.380 [2024-10-01 13:43:59.268602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.268673] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.268701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.380 [2024-10-01 13:43:59.268718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.268737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.268770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.268791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.268806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.268821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.268853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.380 [2024-10-01 13:43:59.268873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.268888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.268902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.268932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.380 [2024-10-01 13:43:59.278464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.278597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.278631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.380 [2024-10-01 13:43:59.278650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.278698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.278741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.278773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.278792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.278806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.278836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.380 [2024-10-01 13:43:59.278897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.278925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.380 [2024-10-01 13:43:59.278943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.278976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.279027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.279047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.279062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.280307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.380 [2024-10-01 13:43:59.288902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.289005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.289093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.289124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.380 [2024-10-01 13:43:59.289142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.290114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.290158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.380 [2024-10-01 13:43:59.290179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.290199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.290822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.290865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.290883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.290898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.291000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.380 [2024-10-01 13:43:59.291026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.291042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.291059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.291091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.380 [2024-10-01 13:43:59.299184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.299259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.380 [2024-10-01 13:43:59.299342] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.299372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.380 [2024-10-01 13:43:59.299390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.299457] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.380 [2024-10-01 13:43:59.299485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.380 [2024-10-01 13:43:59.299502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.380 [2024-10-01 13:43:59.299552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.300324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.380 [2024-10-01 13:43:59.300366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.380 [2024-10-01 13:43:59.300385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.380 [2024-10-01 13:43:59.300400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.380 [2024-10-01 13:43:59.300592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.381 [2024-10-01 13:43:59.300620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.381 [2024-10-01 13:43:59.300635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.381 [2024-10-01 13:43:59.300650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.381 [2024-10-01 13:43:59.300691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.381 [2024-10-01 13:43:59.309386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.381 [2024-10-01 13:43:59.309463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.381 [2024-10-01 13:43:59.309562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.381 [2024-10-01 13:43:59.309595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.381 [2024-10-01 13:43:59.309613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.381 [2024-10-01 13:43:59.309683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.381 [2024-10-01 13:43:59.309711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.381 [2024-10-01 13:43:59.309731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.381 [2024-10-01 13:43:59.309750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.381 [2024-10-01 13:43:59.309782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.381 [2024-10-01 13:43:59.309804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.381 [2024-10-01 13:43:59.309818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.381 [2024-10-01 13:43:59.309832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.381 [2024-10-01 13:43:59.309864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.381 [2024-10-01 13:43:59.309884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.381 [2024-10-01 13:43:59.309898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.381 [2024-10-01 13:43:59.309912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.381 [2024-10-01 13:43:59.309941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.381 [2024-10-01 13:43:59.319488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.381 [2024-10-01 13:43:59.319618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.381 [2024-10-01 13:43:59.319651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.381 [2024-10-01 13:43:59.319688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.381 [2024-10-01 13:43:59.319738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.381 [2024-10-01 13:43:59.319780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.381 [2024-10-01 13:43:59.319812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.381 [2024-10-01 13:43:59.319830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.381 [2024-10-01 13:43:59.319844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.381 [2024-10-01 13:43:59.319886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.381 [2024-10-01 13:43:59.319952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.381 [2024-10-01 13:43:59.319981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.381 [2024-10-01 13:43:59.319998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.381 [2024-10-01 13:43:59.320032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.381 [2024-10-01 13:43:59.320065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.381 [2024-10-01 13:43:59.320083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.381 [2024-10-01 13:43:59.320097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.381 [2024-10-01 13:43:59.320128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.381 [2024-10-01 13:43:59.329599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.381 [2024-10-01 13:43:59.329719] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.381 [2024-10-01 13:43:59.329753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.381 [2024-10-01 13:43:59.329772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.381 [2024-10-01 13:43:59.329805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.381 [2024-10-01 13:43:59.329838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.381 [2024-10-01 13:43:59.329855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.381 [2024-10-01 13:43:59.329869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.381 [2024-10-01 13:43:59.329911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.381 [2024-10-01 13:43:59.329953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.381 [2024-10-01 13:43:59.330038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.381 [2024-10-01 13:43:59.330068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.381 [2024-10-01 13:43:59.330086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.381 [2024-10-01 13:43:59.330118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.381 [2024-10-01 13:43:59.330150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.381 [2024-10-01 13:43:59.330183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.381 [2024-10-01 13:43:59.330199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.381 [2024-10-01 13:43:59.330232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.381 [2024-10-01 13:43:59.339694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.381 [2024-10-01 13:43:59.339814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.381 [2024-10-01 13:43:59.339847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.381 [2024-10-01 13:43:59.339866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.381 [2024-10-01 13:43:59.340649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.381 [2024-10-01 13:43:59.340866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.381 [2024-10-01 13:43:59.340913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.381 [2024-10-01 13:43:59.340931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.381 [2024-10-01 13:43:59.340976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.381 [2024-10-01 13:43:59.341004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.381 [2024-10-01 13:43:59.341089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.381 [2024-10-01 13:43:59.341122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.381 [2024-10-01 13:43:59.341145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.381 [2024-10-01 13:43:59.341178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.381 [2024-10-01 13:43:59.341210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.381 [2024-10-01 13:43:59.341228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.381 [2024-10-01 13:43:59.341242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.381 [2024-10-01 13:43:59.341273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.381 [2024-10-01 13:43:59.351659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.381 [2024-10-01 13:43:59.352375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.381 [2024-10-01 13:43:59.352422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.381 [2024-10-01 13:43:59.352444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.381 [2024-10-01 13:43:59.352565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.381 [2024-10-01 13:43:59.352609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.381 [2024-10-01 13:43:59.352688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.381 [2024-10-01 13:43:59.352718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.381 [2024-10-01 13:43:59.352736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.381 [2024-10-01 13:43:59.352771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.381 [2024-10-01 13:43:59.352787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.381 [2024-10-01 13:43:59.352801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.381 [2024-10-01 13:43:59.353067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.381 [2024-10-01 13:43:59.353098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.381 [2024-10-01 13:43:59.353243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.381 [2024-10-01 13:43:59.353269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.381 [2024-10-01 13:43:59.353284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.381 [2024-10-01 13:43:59.353396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.382 [2024-10-01 13:43:59.362447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.362588] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.362623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.382 [2024-10-01 13:43:59.362641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.362676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.362723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.382 [2024-10-01 13:43:59.362746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.382 [2024-10-01 13:43:59.362761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.382 [2024-10-01 13:43:59.362794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.362819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.382 [2024-10-01 13:43:59.362895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.362925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.382 [2024-10-01 13:43:59.362943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.362975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.363007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.382 [2024-10-01 13:43:59.363025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.382 [2024-10-01 13:43:59.363039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.382 [2024-10-01 13:43:59.363070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.382 [2024-10-01 13:43:59.373648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.373736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.373821] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.373851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.382 [2024-10-01 13:43:59.373887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.373961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.373990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.382 [2024-10-01 13:43:59.374007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.374025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.374059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.374080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.382 [2024-10-01 13:43:59.374094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.382 [2024-10-01 13:43:59.374108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.382 [2024-10-01 13:43:59.374140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.382 [2024-10-01 13:43:59.374160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.382 [2024-10-01 13:43:59.374174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.382 [2024-10-01 13:43:59.374188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.382 [2024-10-01 13:43:59.374218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.382 [2024-10-01 13:43:59.383821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.383906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.383989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.384020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.382 [2024-10-01 13:43:59.384039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.385008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.385053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.382 [2024-10-01 13:43:59.385074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.385093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.385301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.385331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.382 [2024-10-01 13:43:59.385347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.382 [2024-10-01 13:43:59.385361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.382 [2024-10-01 13:43:59.385439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.382 [2024-10-01 13:43:59.385461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.382 [2024-10-01 13:43:59.385476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.382 [2024-10-01 13:43:59.385508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.382 [2024-10-01 13:43:59.386751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.382 [2024-10-01 13:43:59.394697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.394748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.394847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.394879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.382 [2024-10-01 13:43:59.394896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.394946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.394972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.382 [2024-10-01 13:43:59.394989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.395022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.395046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.395072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.382 [2024-10-01 13:43:59.395090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.382 [2024-10-01 13:43:59.395104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.382 [2024-10-01 13:43:59.395122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.382 [2024-10-01 13:43:59.395137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.382 [2024-10-01 13:43:59.395150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.382 [2024-10-01 13:43:59.395412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.382 [2024-10-01 13:43:59.395439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.382 [2024-10-01 13:43:59.404875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.404925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.405023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.405055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.382 [2024-10-01 13:43:59.405073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.405123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.405148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.382 [2024-10-01 13:43:59.405164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.405198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.405221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.405270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.382 [2024-10-01 13:43:59.405290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.382 [2024-10-01 13:43:59.405304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.382 [2024-10-01 13:43:59.405321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.382 [2024-10-01 13:43:59.405337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.382 [2024-10-01 13:43:59.405350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.382 [2024-10-01 13:43:59.405382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.382 [2024-10-01 13:43:59.405402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.382 [2024-10-01 13:43:59.416099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.416151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.416250] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.416283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.382 [2024-10-01 13:43:59.416320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.416386] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.416413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.382 [2024-10-01 13:43:59.416430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.416475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.416500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.416529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.382 [2024-10-01 13:43:59.416564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.382 [2024-10-01 13:43:59.416579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.382 [2024-10-01 13:43:59.416597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.382 [2024-10-01 13:43:59.416619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.382 [2024-10-01 13:43:59.416654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.382 [2024-10-01 13:43:59.416697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.382 [2024-10-01 13:43:59.416720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.382 [2024-10-01 13:43:59.426233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.426318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.382 [2024-10-01 13:43:59.426404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.426434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.382 [2024-10-01 13:43:59.426453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.426576] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.382 [2024-10-01 13:43:59.426609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.382 [2024-10-01 13:43:59.426627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.382 [2024-10-01 13:43:59.426647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.426683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.382 [2024-10-01 13:43:59.426704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.382 [2024-10-01 13:43:59.426719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.382 [2024-10-01 13:43:59.426733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.382 [2024-10-01 13:43:59.426766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.382 [2024-10-01 13:43:59.426787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.426801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.426815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.428078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.383 [2024-10-01 13:43:59.436340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.436460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.436492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.383 [2024-10-01 13:43:59.436511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.437340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.437570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.437613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.437632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.437647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.438646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.383 [2024-10-01 13:43:59.438738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.438768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.383 [2024-10-01 13:43:59.438786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.439395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.439508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.439547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.439566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.439618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.383 [2024-10-01 13:43:59.446435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.446570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.446605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.383 [2024-10-01 13:43:59.446624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.446659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.446692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.446711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.446725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.446757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.383 [2024-10-01 13:43:59.449258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.449375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.449408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.383 [2024-10-01 13:43:59.449427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.449477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.449515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.449548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.449567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.449601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.383 [2024-10-01 13:43:59.456548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.456664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.456696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.383 [2024-10-01 13:43:59.456715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.457653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.457882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.457931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.457950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.458030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.383 [2024-10-01 13:43:59.460413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.460561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.460595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.383 [2024-10-01 13:43:59.460629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.460666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.460699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.460718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.460732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.460764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.383 [2024-10-01 13:43:59.467323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.467442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.467476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.383 [2024-10-01 13:43:59.467495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.467529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.467580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.467600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.467614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.467646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.383 [2024-10-01 13:43:59.470609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.470723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.470755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.383 [2024-10-01 13:43:59.470774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.470823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.471754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.471794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.471813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.472020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.383 [2024-10-01 13:43:59.477488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.477619] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.477652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.383 [2024-10-01 13:43:59.477670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.477703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.477736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.477771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.477787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.477821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.383 [2024-10-01 13:43:59.481438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.481568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.481602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.383 [2024-10-01 13:43:59.481621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.481655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.481690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.481708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.481723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.481755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.383 [2024-10-01 13:43:59.488647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.488764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.488798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.383 [2024-10-01 13:43:59.488817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.488850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.488882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.488901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.488915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.488947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.383 [2024-10-01 13:43:59.491598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.491720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.491753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.383 [2024-10-01 13:43:59.491771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.491804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.491837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.491855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.491869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.491915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.383 [2024-10-01 13:43:59.498859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.498978] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.499011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.383 [2024-10-01 13:43:59.499030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.499064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.499097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.499115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.499129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.500068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.383 [2024-10-01 13:43:59.502776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.502903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.502942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.383 [2024-10-01 13:43:59.502962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.383 [2024-10-01 13:43:59.502996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.383 [2024-10-01 13:43:59.503029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.383 [2024-10-01 13:43:59.503047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.383 [2024-10-01 13:43:59.503062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.383 [2024-10-01 13:43:59.503094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.383 [2024-10-01 13:43:59.509854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.383 [2024-10-01 13:43:59.509973] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.383 [2024-10-01 13:43:59.510005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.384 [2024-10-01 13:43:59.510024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.510057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.510090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.510108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.510122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.510154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.384 [2024-10-01 13:43:59.513099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.513213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.513246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.384 [2024-10-01 13:43:59.513264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.513334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.514268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.514308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.514326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.514552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.384 [2024-10-01 13:43:59.520035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.520168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.520201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.384 [2024-10-01 13:43:59.520219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.520253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.520288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.520305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.520320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.520351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.384 [2024-10-01 13:43:59.524037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.524159] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.524192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.384 [2024-10-01 13:43:59.524210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.524244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.524276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.524295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.524309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.524340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.384 [2024-10-01 13:43:59.531248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.531378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.531411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.384 [2024-10-01 13:43:59.531430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.531464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.531496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.531517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.531566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.531603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.384 [2024-10-01 13:43:59.534251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.534377] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.534410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.384 [2024-10-01 13:43:59.534429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.534462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.534495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.534513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.534527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.534579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.384 [2024-10-01 13:43:59.541514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.541658] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.541705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.384 [2024-10-01 13:43:59.541735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.541776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.542722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.542760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.542779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.543003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.384 [2024-10-01 13:43:59.545505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.545643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.545676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.384 [2024-10-01 13:43:59.545694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.545729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.545761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.545780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.545794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.545836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.384 [2024-10-01 13:43:59.552515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.552670] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.552702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.384 [2024-10-01 13:43:59.552721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.552765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.552798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.552816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.552830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.552862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.384 [2024-10-01 13:43:59.555761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.555886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.555920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.384 [2024-10-01 13:43:59.555938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.555990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.556925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.556964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.556983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.557185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.384 [2024-10-01 13:43:59.562681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.562838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.562872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.384 [2024-10-01 13:43:59.562891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.562926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.562959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.562977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.562992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.563024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.384 [2024-10-01 13:43:59.566656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.566774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.566807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.384 [2024-10-01 13:43:59.566826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.566859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.566912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.566932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.566947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.566979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.384 [2024-10-01 13:43:59.573916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.574042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.574076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.384 [2024-10-01 13:43:59.574095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.574129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.574162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.574180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.574194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.574226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.384 [2024-10-01 13:43:59.576929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.577051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.577084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.384 [2024-10-01 13:43:59.577102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.577137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.577169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.577188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.577202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.577233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.384 [2024-10-01 13:43:59.584311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.584441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.584475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.384 [2024-10-01 13:43:59.584494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.585436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.585679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.585718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.585737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.585835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.384 [2024-10-01 13:43:59.588232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.588388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.588425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.384 [2024-10-01 13:43:59.588444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.588479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.588512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.588532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.384 [2024-10-01 13:43:59.588564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.384 [2024-10-01 13:43:59.588599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.384 [2024-10-01 13:43:59.595334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.384 [2024-10-01 13:43:59.595471] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.384 [2024-10-01 13:43:59.595505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.384 [2024-10-01 13:43:59.595524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.384 [2024-10-01 13:43:59.595577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.384 [2024-10-01 13:43:59.595613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.384 [2024-10-01 13:43:59.595631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.595645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.595678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.385 [2024-10-01 13:43:59.598847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.598968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.599001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.385 [2024-10-01 13:43:59.599020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.599054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.599086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.599105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.599119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.599151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.385 [2024-10-01 13:43:59.606060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.606180] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.606214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.385 [2024-10-01 13:43:59.606251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.606287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.606322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.606339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.606354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.606395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.385 [2024-10-01 13:43:59.610163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.610284] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.610318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.385 [2024-10-01 13:43:59.610336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.610369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.610412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.610431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.610445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.610478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.385 [2024-10-01 13:43:59.617422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.617564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.617598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.385 [2024-10-01 13:43:59.617617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.617652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.617686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.617703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.617718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.617755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.385 [2024-10-01 13:43:59.620660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.620789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.620824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.385 [2024-10-01 13:43:59.620843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.620877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.620912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.620949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.620965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.620999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.385 [2024-10-01 13:43:59.627814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.627966] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.628001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.385 [2024-10-01 13:43:59.628021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.628055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.628986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.629028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.629047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.629239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.385 [2024-10-01 13:43:59.631758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.631890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.631925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.385 [2024-10-01 13:43:59.631944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.631993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.632031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.632061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.632075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.632107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.385 [2024-10-01 13:43:59.638710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.638830] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.638864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.385 [2024-10-01 13:43:59.638883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.638916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.638949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.638967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.638981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.639013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.385 [2024-10-01 13:43:59.641986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.642104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.642137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.385 [2024-10-01 13:43:59.642155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.642188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.642227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.642245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.642262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.643197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.385 [2024-10-01 13:43:59.648893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.649012] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.649045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.385 [2024-10-01 13:43:59.649064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.649097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.649130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.649148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.649163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.649196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.385 [2024-10-01 13:43:59.652863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.653000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.653034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.385 [2024-10-01 13:43:59.653052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.653084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.653117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.653135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.653150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.653182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.385 [2024-10-01 13:43:59.660006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.660136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.660169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.385 [2024-10-01 13:43:59.660187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.660237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.660272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.660290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.660305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.660337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.385 [2024-10-01 13:43:59.663027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.663142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.663174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.385 [2024-10-01 13:43:59.663193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.663226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.663258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.663276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.663291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.663323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.385 8440.57 IOPS, 32.97 MiB/s [2024-10-01 13:43:59.670245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.670362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.670395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.385 [2024-10-01 13:43:59.670414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.670448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.671391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.671431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.671449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.671660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.385 [2024-10-01 13:43:59.674317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.674444] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.674479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.385 [2024-10-01 13:43:59.674499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.674552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.674593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.385 [2024-10-01 13:43:59.674612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.385 [2024-10-01 13:43:59.674643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.385 [2024-10-01 13:43:59.674678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.385 [2024-10-01 13:43:59.681078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.385 [2024-10-01 13:43:59.681212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.385 [2024-10-01 13:43:59.681246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.385 [2024-10-01 13:43:59.681266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.385 [2024-10-01 13:43:59.681300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.385 [2024-10-01 13:43:59.681333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.681351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.681365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.681399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.386 [2024-10-01 13:43:59.684420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.684554] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.684588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.386 [2024-10-01 13:43:59.684608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.685531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.685784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.685828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.685847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.685928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.386 [2024-10-01 13:43:59.691221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.691343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.691375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.386 [2024-10-01 13:43:59.691394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.691428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.691460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.691478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.691493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.691526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.386 [2024-10-01 13:43:59.695140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.695280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.695315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.386 [2024-10-01 13:43:59.695334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.695385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.695423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.695442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.695456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.695488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.386 [2024-10-01 13:43:59.702326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.702447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.702480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.386 [2024-10-01 13:43:59.702499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.702532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.702585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.702603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.702618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.702651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.386 [2024-10-01 13:43:59.705313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.705428] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.705460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.386 [2024-10-01 13:43:59.705479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.705512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.705562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.705583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.705598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.705630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.386 [2024-10-01 13:43:59.712491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.712621] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.712655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.386 [2024-10-01 13:43:59.712674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.712742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.713680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.713719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.713738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.713940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.386 [2024-10-01 13:43:59.716442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.716579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.716613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.386 [2024-10-01 13:43:59.716631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.716666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.716718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.716741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.716756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.716789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.386 [2024-10-01 13:43:59.723371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.723488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.723521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.386 [2024-10-01 13:43:59.723554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.723592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.723625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.723643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.723657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.723689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.386 [2024-10-01 13:43:59.726651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.726768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.726800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.386 [2024-10-01 13:43:59.726819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.726852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.726884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.726902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.726931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.727863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.386 [2024-10-01 13:43:59.733506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.733652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.733686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.386 [2024-10-01 13:43:59.733705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.733738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.733770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.733788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.733802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.733834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.386 [2024-10-01 13:43:59.737487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.737621] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.737654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.386 [2024-10-01 13:43:59.737673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.737723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.737761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.737779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.737794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.737826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.386 [2024-10-01 13:43:59.744649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.744768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.744801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.386 [2024-10-01 13:43:59.744820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.744854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.744886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.744903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.744918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.744950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.386 [2024-10-01 13:43:59.747648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.747771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.747822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.386 [2024-10-01 13:43:59.747843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.747889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.747925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.747943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.747957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.747988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.386 [2024-10-01 13:43:59.754872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.754993] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.755027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.386 [2024-10-01 13:43:59.755046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.755096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.756051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.756091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.756110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.756312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.386 [2024-10-01 13:43:59.758784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.758910] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.758942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.386 [2024-10-01 13:43:59.758960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.758995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.759044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.759067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.759082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.759114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.386 [2024-10-01 13:43:59.766159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.766278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.766312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.386 [2024-10-01 13:43:59.766330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.766364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.766421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.766440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.766455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.766487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.386 [2024-10-01 13:43:59.769769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.769885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.769918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.386 [2024-10-01 13:43:59.769937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.769970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.770003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.770021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.770036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.770977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.386 [2024-10-01 13:43:59.777804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.777951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.777985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.386 [2024-10-01 13:43:59.778004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.778038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.778071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.778088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.778102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.778134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.386 [2024-10-01 13:43:59.781492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.782196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.782242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.386 [2024-10-01 13:43:59.782264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.386 [2024-10-01 13:43:59.782368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.386 [2024-10-01 13:43:59.782409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.386 [2024-10-01 13:43:59.782428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.386 [2024-10-01 13:43:59.782442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.386 [2024-10-01 13:43:59.782476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.386 [2024-10-01 13:43:59.789470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.386 [2024-10-01 13:43:59.789614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.386 [2024-10-01 13:43:59.789649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.387 [2024-10-01 13:43:59.789667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.789702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.789735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.789753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.789767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.789799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.387 [2024-10-01 13:43:59.792488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.792626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.792659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.387 [2024-10-01 13:43:59.792678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.792711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.792744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.792762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.792776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.792808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.387 [2024-10-01 13:43:59.799788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.799919] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.799954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.387 [2024-10-01 13:43:59.799982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.800016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.800049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.800067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.800081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.801007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.387 [2024-10-01 13:43:59.803772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.803907] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.803941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.387 [2024-10-01 13:43:59.803977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.804014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.804048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.804066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.804081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.804113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.387 [2024-10-01 13:43:59.810733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.810852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.810885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.387 [2024-10-01 13:43:59.810904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.810938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.810971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.810989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.811003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.811035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.387 [2024-10-01 13:43:59.813992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.814107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.814140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.387 [2024-10-01 13:43:59.814159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.814207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.815142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.815182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.815201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.815388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.387 [2024-10-01 13:43:59.820876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.820995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.821028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.387 [2024-10-01 13:43:59.821047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.821081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.821115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.821150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.821166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.821200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.387 [2024-10-01 13:43:59.824981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.825098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.825130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.387 [2024-10-01 13:43:59.825149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.825182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.825215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.825233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.825248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.825279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.387 [2024-10-01 13:43:59.832186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.832314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.832348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.387 [2024-10-01 13:43:59.832367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.832400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.832433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.832451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.832465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.832497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.387 [2024-10-01 13:43:59.835151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.835273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.835305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.387 [2024-10-01 13:43:59.835323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.835357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.835389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.835408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.835422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.835453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.387 [2024-10-01 13:43:59.842343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.842482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.842516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.387 [2024-10-01 13:43:59.842548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.843467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.843723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.843763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.843782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.843861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.387 [2024-10-01 13:43:59.846270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.846393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.846426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.387 [2024-10-01 13:43:59.846444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.846477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.846510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.846528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.846559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.846594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.387 [2024-10-01 13:43:59.853214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.853332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.853365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.387 [2024-10-01 13:43:59.853383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.853417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.853449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.853467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.853481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.853513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.387 [2024-10-01 13:43:59.856486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.856613] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.856646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.387 [2024-10-01 13:43:59.856665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.856732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.857672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.857710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.857730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.857932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.387 [2024-10-01 13:43:59.863353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.863479] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.863512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.387 [2024-10-01 13:43:59.863531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.863584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.863620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.863637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.863651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.863683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.387 [2024-10-01 13:43:59.867350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.867465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.867498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.387 [2024-10-01 13:43:59.867517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.867567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.867604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.867622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.867637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.867668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.387 [2024-10-01 13:43:59.874518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.874676] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.874714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.387 [2024-10-01 13:43:59.874733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.874768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.874802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.874819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.874851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.874888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.387 [2024-10-01 13:43:59.877504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.387 [2024-10-01 13:43:59.877635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.387 [2024-10-01 13:43:59.877668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.387 [2024-10-01 13:43:59.877687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.387 [2024-10-01 13:43:59.877721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.387 [2024-10-01 13:43:59.877753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.387 [2024-10-01 13:43:59.877772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.387 [2024-10-01 13:43:59.877787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.387 [2024-10-01 13:43:59.877819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.388 [2024-10-01 13:43:59.884723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.884847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.884882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.388 [2024-10-01 13:43:59.884901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.884936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.885865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.885905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.885924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.886146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.388 [2024-10-01 13:43:59.888650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.888768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.888801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.388 [2024-10-01 13:43:59.888820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.888853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.888886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.888904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.888919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.888951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.388 [2024-10-01 13:43:59.895528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.895661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.895711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.388 [2024-10-01 13:43:59.895732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.895767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.895800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.895818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.895832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.895865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.388 [2024-10-01 13:43:59.898820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.898939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.898972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.388 [2024-10-01 13:43:59.898991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.899952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.900177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.900215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.900233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.900315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.388 [2024-10-01 13:43:59.905638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.905762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.905796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.388 [2024-10-01 13:43:59.905815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.905848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.905881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.905899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.905913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.905945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.388 [2024-10-01 13:43:59.909625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.909744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.909776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.388 [2024-10-01 13:43:59.909795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.909828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.909880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.909900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.909915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.909947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.388 [2024-10-01 13:43:59.916798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.916917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.916950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.388 [2024-10-01 13:43:59.916968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.917002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.917035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.917052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.917068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.917100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.388 [2024-10-01 13:43:59.919743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.919860] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.919905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.388 [2024-10-01 13:43:59.919924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.919959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.919991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.920009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.920023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.920055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.388 [2024-10-01 13:43:59.927075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.927261] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.927298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.388 [2024-10-01 13:43:59.927317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.928280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.928530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.928580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.928599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.928681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.388 [2024-10-01 13:43:59.931027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.931152] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.931185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.388 [2024-10-01 13:43:59.931204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.931238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.931270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.931289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.931303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.931335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.388 [2024-10-01 13:43:59.937994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.938114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.938146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.388 [2024-10-01 13:43:59.938165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.938199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.938231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.938249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.938263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.938295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.388 [2024-10-01 13:43:59.941221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.941336] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.941368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.388 [2024-10-01 13:43:59.941387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.941436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.942367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.942406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.942426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.942629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.388 [2024-10-01 13:43:59.948089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.948208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.948241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.388 [2024-10-01 13:43:59.948277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.948313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.948346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.948364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.948379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.948411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.388 [2024-10-01 13:43:59.952028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.952145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.952177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.388 [2024-10-01 13:43:59.952195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.952229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.952261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.952279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.952293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.952325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.388 [2024-10-01 13:43:59.959189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.959306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.959339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.388 [2024-10-01 13:43:59.959358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.959391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.959423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.959441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.959455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.959487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.388 [2024-10-01 13:43:59.962145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.962258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.962290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.388 [2024-10-01 13:43:59.962308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.962342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.962375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.962410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.962426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.962459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.388 [2024-10-01 13:43:59.969285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.969403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.969436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.388 [2024-10-01 13:43:59.969454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.969488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.970413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.970453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.970472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.970699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.388 [2024-10-01 13:43:59.973198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.973324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.973357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.388 [2024-10-01 13:43:59.973375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.973408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.973441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.973459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.973474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.973505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.388 [2024-10-01 13:43:59.980225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.980419] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.980456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.388 [2024-10-01 13:43:59.980476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.980515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.980564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.388 [2024-10-01 13:43:59.980585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.388 [2024-10-01 13:43:59.980602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.388 [2024-10-01 13:43:59.980635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.388 [2024-10-01 13:43:59.983614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.388 [2024-10-01 13:43:59.983774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.388 [2024-10-01 13:43:59.983807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.388 [2024-10-01 13:43:59.983826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.388 [2024-10-01 13:43:59.984792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.388 [2024-10-01 13:43:59.985009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:43:59.985046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:43:59.985064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:43:59.985145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.389 [2024-10-01 13:43:59.990474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:43:59.990606] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:43:59.990639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.389 [2024-10-01 13:43:59.990659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:43:59.990693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:43:59.990726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:43:59.990744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:43:59.990758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:43:59.990790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.389 [2024-10-01 13:43:59.994476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:43:59.994606] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:43:59.994639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.389 [2024-10-01 13:43:59.994658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:43:59.994691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:43:59.994724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:43:59.994742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:43:59.994757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:43:59.994789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.389 [2024-10-01 13:44:00.002018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.002152] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.002187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.389 [2024-10-01 13:44:00.002206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.002262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.002297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.002315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.002330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.002364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.389 [2024-10-01 13:44:00.004686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.004832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.004869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.389 [2024-10-01 13:44:00.004888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.004922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.004956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.004975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.004989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.005022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.389 [2024-10-01 13:44:00.013187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.013309] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.013342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.389 [2024-10-01 13:44:00.013361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.013401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.014644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.014687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.014705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.015600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.389 [2024-10-01 13:44:00.015890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.016006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.016039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.389 [2024-10-01 13:44:00.016058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.016092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.016125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.016143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.016173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.016209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.389 [2024-10-01 13:44:00.023652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.023850] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.023899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.389 [2024-10-01 13:44:00.023921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.023965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.024000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.024019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.024034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.024067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.389 [2024-10-01 13:44:00.026077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.026193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.026225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.389 [2024-10-01 13:44:00.026244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.026277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.027207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.027246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.027265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.027461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.389 [2024-10-01 13:44:00.033748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.033866] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.033898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.389 [2024-10-01 13:44:00.033917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.033950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.033984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.034002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.034017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.034049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.389 [2024-10-01 13:44:00.036927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.037045] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.037101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.389 [2024-10-01 13:44:00.037122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.037157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.037190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.037208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.037222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.037254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.389 [2024-10-01 13:44:00.044087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.044215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.044249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.389 [2024-10-01 13:44:00.044268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.044302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.044335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.044353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.044367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.044400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.389 [2024-10-01 13:44:00.047055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.047186] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.047218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.389 [2024-10-01 13:44:00.047237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.047271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.047304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.047323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.047337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.047369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.389 [2024-10-01 13:44:00.054284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.054429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.054463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.389 [2024-10-01 13:44:00.054483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.054518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.055489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.055529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.055561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.055790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.389 [2024-10-01 13:44:00.058223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.058341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.058373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.389 [2024-10-01 13:44:00.058391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.058425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.058458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.058476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.058491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.058524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.389 [2024-10-01 13:44:00.065194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.065357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.065393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.389 [2024-10-01 13:44:00.065412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.065450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.065483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.065501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.065516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.065565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.389 [2024-10-01 13:44:00.068521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.068657] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.068690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.389 [2024-10-01 13:44:00.068708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.069655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.069881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.069918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.069936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.070017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.389 [2024-10-01 13:44:00.075342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.075480] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.075523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.389 [2024-10-01 13:44:00.075560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.075597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.075630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.075648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.075663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.075694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.389 [2024-10-01 13:44:00.079366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.389 [2024-10-01 13:44:00.079561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.389 [2024-10-01 13:44:00.079599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.389 [2024-10-01 13:44:00.079619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.389 [2024-10-01 13:44:00.079657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.389 [2024-10-01 13:44:00.079690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.389 [2024-10-01 13:44:00.079709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.389 [2024-10-01 13:44:00.079724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.389 [2024-10-01 13:44:00.079759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.390 [2024-10-01 13:44:00.086745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.086948] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.086985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.390 [2024-10-01 13:44:00.087005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.087042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.087076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.087094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.087110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.087143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.390 [2024-10-01 13:44:00.089756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.089894] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.089928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.390 [2024-10-01 13:44:00.089972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.090009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.090042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.090061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.090076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.090109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.390 [2024-10-01 13:44:00.097035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.097196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.097232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.390 [2024-10-01 13:44:00.097251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.098206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.098453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.098492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.098511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.098609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.390 [2024-10-01 13:44:00.100978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.101096] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.101134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.390 [2024-10-01 13:44:00.101155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.101188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.101221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.101239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.101254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.101286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.390 [2024-10-01 13:44:00.107899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.108018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.108053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.390 [2024-10-01 13:44:00.108071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.108105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.108138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.108179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.108195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.108229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.390 [2024-10-01 13:44:00.111133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.111248] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.111283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.390 [2024-10-01 13:44:00.111302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.111351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.112298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.112339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.112358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.112573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.390 [2024-10-01 13:44:00.118106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.118307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.118372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.390 [2024-10-01 13:44:00.118411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.118469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.118523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.118570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.118588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.118625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.390 [2024-10-01 13:44:00.121986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.122111] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.122157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.390 [2024-10-01 13:44:00.122179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.122213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.122246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.122265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.122280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.122312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.390 [2024-10-01 13:44:00.129135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.129277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.129324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.390 [2024-10-01 13:44:00.129345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.129380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.129413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.129432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.129447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.129479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.390 [2024-10-01 13:44:00.132111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.132236] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.132279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.390 [2024-10-01 13:44:00.132300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.132334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.132367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.132385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.132400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.132432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.390 [2024-10-01 13:44:00.139292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.139411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.139444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.390 [2024-10-01 13:44:00.139463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.139497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.139530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.139564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.139580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.140504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.390 [2024-10-01 13:44:00.143240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.143357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.143400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.390 [2024-10-01 13:44:00.143421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.143475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.143509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.143527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.143557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.143592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.390 [2024-10-01 13:44:00.150088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.150218] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.150251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.390 [2024-10-01 13:44:00.150269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.150303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.150335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.150354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.150368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.150400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.390 [2024-10-01 13:44:00.153331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.153447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.153489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.390 [2024-10-01 13:44:00.153508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.153574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.154493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.154546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.154568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.154757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.390 [2024-10-01 13:44:00.160188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.160304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.160337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.390 [2024-10-01 13:44:00.160356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.160390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.160423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.160441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.160474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.160510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.390 [2024-10-01 13:44:00.164204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.164323] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.164356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.390 [2024-10-01 13:44:00.164375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.164408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.164440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.164458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.164473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.164505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.390 [2024-10-01 13:44:00.171349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.171476] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.171509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.390 [2024-10-01 13:44:00.171528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.171579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.171613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.171632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.171647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.171679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.390 [2024-10-01 13:44:00.174361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.174475] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.174508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.390 [2024-10-01 13:44:00.174527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.174578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.174613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.174631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.174646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.174678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.390 [2024-10-01 13:44:00.181580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.181698] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.181750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.390 [2024-10-01 13:44:00.181771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.182702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.390 [2024-10-01 13:44:00.182933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.390 [2024-10-01 13:44:00.182971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.390 [2024-10-01 13:44:00.182990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.390 [2024-10-01 13:44:00.183069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.390 [2024-10-01 13:44:00.185481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.390 [2024-10-01 13:44:00.185621] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.390 [2024-10-01 13:44:00.185655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.390 [2024-10-01 13:44:00.185673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.390 [2024-10-01 13:44:00.185706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.185739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.185757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.185771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.185803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.391 [2024-10-01 13:44:00.192457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.192587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.192621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.391 [2024-10-01 13:44:00.192639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.192674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.192707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.192725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.192739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.192772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.391 [2024-10-01 13:44:00.195722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.195835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.195867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.391 [2024-10-01 13:44:00.195898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.195948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.196900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.196940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.196959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.197150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.391 [2024-10-01 13:44:00.202585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.202701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.202735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.391 [2024-10-01 13:44:00.202753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.202787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.202819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.202838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.202852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.202883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.391 [2024-10-01 13:44:00.206760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.206876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.206909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.391 [2024-10-01 13:44:00.206928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.206961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.206994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.207013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.207027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.207059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.391 [2024-10-01 13:44:00.213230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.213348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.213380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.391 [2024-10-01 13:44:00.213399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.213432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.213465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.213483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.213497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.213566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.391 [2024-10-01 13:44:00.216856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.216972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.217004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.391 [2024-10-01 13:44:00.217023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.217056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.217089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.217107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.217122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.217154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.391 [2024-10-01 13:44:00.224089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.224208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.224241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.391 [2024-10-01 13:44:00.224260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.224293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.224343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.224366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.224381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.224413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.391 [2024-10-01 13:44:00.227528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.227656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.227689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.391 [2024-10-01 13:44:00.227707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.227740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.227772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.227791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.227806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.228744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.391 [2024-10-01 13:44:00.234485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.234618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.234652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.391 [2024-10-01 13:44:00.234690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.234726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.234759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.234777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.234792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.234824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.391 [2024-10-01 13:44:00.238616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.238734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.238766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.391 [2024-10-01 13:44:00.238785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.238818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.238850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.238868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.238883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.238915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.391 [2024-10-01 13:44:00.245613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.245875] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.245922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.391 [2024-10-01 13:44:00.245941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.245983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.246036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.246059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.246074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.246107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.391 [2024-10-01 13:44:00.248801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.248916] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.248949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.391 [2024-10-01 13:44:00.248967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.248999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.249031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.249065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.249081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.249114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.391 [2024-10-01 13:44:00.256071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.256189] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.256224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.391 [2024-10-01 13:44:00.256243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.256276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.256309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.256327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.256342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.257290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.391 [2024-10-01 13:44:00.259216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.259337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.259371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.391 [2024-10-01 13:44:00.259390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.259423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.259456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.259474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.259489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.259521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.391 [2024-10-01 13:44:00.267511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.268283] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.268332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.391 [2024-10-01 13:44:00.268354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.268449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.268489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.268508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.268522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.268573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.391 [2024-10-01 13:44:00.269315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.269454] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.269487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.391 [2024-10-01 13:44:00.269506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.269555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.269593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.269613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.269628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.269660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.391 [2024-10-01 13:44:00.278455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.278593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.278628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.391 [2024-10-01 13:44:00.278647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.278682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.278714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.278732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.278747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.278779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.391 [2024-10-01 13:44:00.279426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.279529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.279574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.391 [2024-10-01 13:44:00.279592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.279626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.391 [2024-10-01 13:44:00.279676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.391 [2024-10-01 13:44:00.279699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.391 [2024-10-01 13:44:00.279713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.391 [2024-10-01 13:44:00.279745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.391 [2024-10-01 13:44:00.289786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.289870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.391 [2024-10-01 13:44:00.289957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.391 [2024-10-01 13:44:00.289989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.391 [2024-10-01 13:44:00.290027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.391 [2024-10-01 13:44:00.290100] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.290130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.392 [2024-10-01 13:44:00.290147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.290166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.290200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.290221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.290236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.290250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.290283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.392 [2024-10-01 13:44:00.290303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.290318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.290332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.290362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.392 [2024-10-01 13:44:00.299944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.300026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.300112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.300142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.392 [2024-10-01 13:44:00.300161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.301147] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.301192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.392 [2024-10-01 13:44:00.301214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.301235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.301445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.301476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.301492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.301507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.302792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.392 [2024-10-01 13:44:00.302831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.302850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.302888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.303766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.392 [2024-10-01 13:44:00.310826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.310898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.311024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.311070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.392 [2024-10-01 13:44:00.311100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.311171] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.311204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.392 [2024-10-01 13:44:00.311223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.311259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.311284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.311589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.311628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.311647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.311665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.311682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.311695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.311844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.392 [2024-10-01 13:44:00.311871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.392 [2024-10-01 13:44:00.320994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.321077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.321166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.321204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.392 [2024-10-01 13:44:00.321224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.321296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.321325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.392 [2024-10-01 13:44:00.321343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.321363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.321397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.321418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.321458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.321475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.321509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.392 [2024-10-01 13:44:00.321530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.321564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.321579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.321611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.392 [2024-10-01 13:44:00.332269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.332388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.332498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.332532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.392 [2024-10-01 13:44:00.332570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.332643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.332672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.392 [2024-10-01 13:44:00.332690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.332712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.332745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.332766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.332781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.332798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.332831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.392 [2024-10-01 13:44:00.332852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.332866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.332881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.332910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.392 [2024-10-01 13:44:00.342559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.342610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.342710] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.342743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.392 [2024-10-01 13:44:00.342761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.342844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.342871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.392 [2024-10-01 13:44:00.342888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.343821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.343868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.344090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.344127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.344145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.344164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.344180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.344194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.344307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.392 [2024-10-01 13:44:00.344330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.392 [2024-10-01 13:44:00.353516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.353627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.353762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.353798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.392 [2024-10-01 13:44:00.353818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.353870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.353896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.392 [2024-10-01 13:44:00.353913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.353950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.353974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.354236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.354265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.354282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.354299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.354314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.354328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.354480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.392 [2024-10-01 13:44:00.354528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.392 [2024-10-01 13:44:00.363711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.363790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.363885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.363918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.392 [2024-10-01 13:44:00.363937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.364007] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.364035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.392 [2024-10-01 13:44:00.364052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.364072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.364105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.364126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.364140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.364155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.364187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.392 [2024-10-01 13:44:00.364207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.364231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.364245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.364275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.392 [2024-10-01 13:44:00.374848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.374900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.374999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.375032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.392 [2024-10-01 13:44:00.375050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.375100] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.375126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.392 [2024-10-01 13:44:00.375142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.375175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.375199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.392 [2024-10-01 13:44:00.375226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.375243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.375275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.375293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.392 [2024-10-01 13:44:00.375309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.392 [2024-10-01 13:44:00.375323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.392 [2024-10-01 13:44:00.375357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.392 [2024-10-01 13:44:00.375377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.392 [2024-10-01 13:44:00.385040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.385094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.392 [2024-10-01 13:44:00.385195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.385228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.392 [2024-10-01 13:44:00.385247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.385298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.392 [2024-10-01 13:44:00.385324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.392 [2024-10-01 13:44:00.385341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.392 [2024-10-01 13:44:00.386272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.386318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.386550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.386587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.386606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.386624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.386640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.386653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.386767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.393 [2024-10-01 13:44:00.386789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.393 [2024-10-01 13:44:00.395912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.395963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.396064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.396095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.393 [2024-10-01 13:44:00.396113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.396164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.396190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.393 [2024-10-01 13:44:00.396232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.396268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.396291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.396318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.396336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.396351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.396367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.396383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.396397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.396689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.393 [2024-10-01 13:44:00.396718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.393 [2024-10-01 13:44:00.406214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.406313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.406452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.406489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.393 [2024-10-01 13:44:00.406509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.406581] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.406609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.393 [2024-10-01 13:44:00.406626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.406663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.406688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.406715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.406733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.406749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.406767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.406783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.406796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.406829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.393 [2024-10-01 13:44:00.406849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.393 [2024-10-01 13:44:00.417379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.417476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.417761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.417809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.393 [2024-10-01 13:44:00.417831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.417885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.417910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.393 [2024-10-01 13:44:00.417927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.417971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.417997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.418025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.418044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.418060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.418079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.418094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.418108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.418141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.393 [2024-10-01 13:44:00.418162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.393 [2024-10-01 13:44:00.427966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.428018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.428118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.428150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.393 [2024-10-01 13:44:00.428169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.428220] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.428246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.393 [2024-10-01 13:44:00.428263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.429194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.429240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.429460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.429499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.429518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.429567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.429588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.429602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.429717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.393 [2024-10-01 13:44:00.429740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.393 [2024-10-01 13:44:00.439146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.439229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.439356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.439391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.393 [2024-10-01 13:44:00.439411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.439464] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.439490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.393 [2024-10-01 13:44:00.439506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.439557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.439585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.439614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.439634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.439649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.439667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.439683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.439697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.439729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.393 [2024-10-01 13:44:00.439749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.393 [2024-10-01 13:44:00.449569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.449619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.449732] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.449764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.393 [2024-10-01 13:44:00.449783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.449834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.449859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.393 [2024-10-01 13:44:00.449900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.449937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.449961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.449988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.450007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.450022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.450039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.450055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.450068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.450100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.393 [2024-10-01 13:44:00.450120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.393 [2024-10-01 13:44:00.461014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.461069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.461178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.461211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.393 [2024-10-01 13:44:00.461229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.461280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.461306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.393 [2024-10-01 13:44:00.461322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.461356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.461380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.461407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.461425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.461440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.461457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.461473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.461486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.461518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.393 [2024-10-01 13:44:00.461552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.393 [2024-10-01 13:44:00.471218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.471269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.471388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.471435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.393 [2024-10-01 13:44:00.471456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.471508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.471549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.393 [2024-10-01 13:44:00.471571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.472516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.472575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.472781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.472818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.472836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.472854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.472870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.472884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.472996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.393 [2024-10-01 13:44:00.473018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.393 [2024-10-01 13:44:00.482066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.482116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.482214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.482246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.393 [2024-10-01 13:44:00.482264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.482315] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.482340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.393 [2024-10-01 13:44:00.482357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.482390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.482414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.393 [2024-10-01 13:44:00.482441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.482459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.482473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.482490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.393 [2024-10-01 13:44:00.482524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.393 [2024-10-01 13:44:00.482559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.393 [2024-10-01 13:44:00.482827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.393 [2024-10-01 13:44:00.482853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.393 [2024-10-01 13:44:00.492211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.492287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.393 [2024-10-01 13:44:00.492378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.393 [2024-10-01 13:44:00.492410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.393 [2024-10-01 13:44:00.492429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.393 [2024-10-01 13:44:00.492497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.492525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.394 [2024-10-01 13:44:00.492557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.492578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.492613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.492634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.492648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.492663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.492695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.394 [2024-10-01 13:44:00.492715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.492729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.492744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.492773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.394 [2024-10-01 13:44:00.503337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.503389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.503486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.503519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.394 [2024-10-01 13:44:00.503552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.503609] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.503635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.394 [2024-10-01 13:44:00.503652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.503706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.503732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.503760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.503778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.503792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.503809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.503825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.503838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.503870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.394 [2024-10-01 13:44:00.503903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.394 [2024-10-01 13:44:00.513695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.513796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.513945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.513982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.394 [2024-10-01 13:44:00.514001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.514056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.514081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.394 [2024-10-01 13:44:00.514098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.515072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.515120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.515319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.515355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.515375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.515396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.515412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.515426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.515559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.394 [2024-10-01 13:44:00.515584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.394 [2024-10-01 13:44:00.524748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.524828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.524956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.525018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.394 [2024-10-01 13:44:00.525040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.525095] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.525122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.394 [2024-10-01 13:44:00.525140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.525178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.525202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.525230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.525248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.525263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.525281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.525296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.525310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.525342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.394 [2024-10-01 13:44:00.525362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.394 [2024-10-01 13:44:00.535136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.535188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.535286] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.535319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.394 [2024-10-01 13:44:00.535338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.535388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.535414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.394 [2024-10-01 13:44:00.535430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.535463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.535487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.535515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.535549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.535568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.535585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.535601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.535631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.535667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.394 [2024-10-01 13:44:00.535687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.394 [2024-10-01 13:44:00.546272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.546326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.546435] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.546468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.394 [2024-10-01 13:44:00.546486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.546551] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.546579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.394 [2024-10-01 13:44:00.546596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.546631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.546654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.546700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.546723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.546737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.546754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.546770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.546785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.546818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.394 [2024-10-01 13:44:00.546838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.394 [2024-10-01 13:44:00.556723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.556780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.556885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.556924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.394 [2024-10-01 13:44:00.556945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.556997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.557024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.394 [2024-10-01 13:44:00.557041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.557985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.558058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.558268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.558307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.558325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.558344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.558360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.558374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.558489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.394 [2024-10-01 13:44:00.558512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.394 [2024-10-01 13:44:00.567904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.567962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.568071] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.568106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.394 [2024-10-01 13:44:00.568125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.568176] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.568202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.394 [2024-10-01 13:44:00.568219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.568253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.568276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.568303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.568321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.568336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.568354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.568369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.568383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.568415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.394 [2024-10-01 13:44:00.568436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.394 [2024-10-01 13:44:00.578349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.578403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.578504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.578550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.394 [2024-10-01 13:44:00.578598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.578656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.578683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.394 [2024-10-01 13:44:00.578700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.578734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.578758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.394 [2024-10-01 13:44:00.578786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.578804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.578819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.578836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.394 [2024-10-01 13:44:00.578852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.394 [2024-10-01 13:44:00.578866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.394 [2024-10-01 13:44:00.578898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.394 [2024-10-01 13:44:00.578918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.394 [2024-10-01 13:44:00.589622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.589680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.394 [2024-10-01 13:44:00.589794] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.589826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.394 [2024-10-01 13:44:00.589845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.589895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.394 [2024-10-01 13:44:00.589921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.394 [2024-10-01 13:44:00.589937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.394 [2024-10-01 13:44:00.589988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.590016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.590044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.590063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.590078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.590095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.590111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.590124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.590174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.395 [2024-10-01 13:44:00.590195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.395 [2024-10-01 13:44:00.600176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.600267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.600402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.600438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.395 [2024-10-01 13:44:00.600459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.600511] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.600554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.395 [2024-10-01 13:44:00.600576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.601517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.601578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.601780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.601818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.601838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.601857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.601873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.601886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.602035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.395 [2024-10-01 13:44:00.602062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.395 [2024-10-01 13:44:00.611656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.611717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.612625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.612680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.395 [2024-10-01 13:44:00.612704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.612761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.612788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.395 [2024-10-01 13:44:00.612819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.612969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.613022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.613349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.613391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.613410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.613430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.613446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.613459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.613630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.395 [2024-10-01 13:44:00.613658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.395 [2024-10-01 13:44:00.622836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.622890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.622994] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.623042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.395 [2024-10-01 13:44:00.623078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.623137] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.623163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.395 [2024-10-01 13:44:00.623180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.623216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.623240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.623267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.623285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.623300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.623318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.623333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.623347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.623379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.395 [2024-10-01 13:44:00.623399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.395 [2024-10-01 13:44:00.634198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.634273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.634393] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.634430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.395 [2024-10-01 13:44:00.634450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.634550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.634580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.395 [2024-10-01 13:44:00.634597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.634635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.634660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.634687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.634706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.634721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.634739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.634756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.634770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.634802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.395 [2024-10-01 13:44:00.634829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.395 [2024-10-01 13:44:00.644481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.644569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.644712] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.644748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.395 [2024-10-01 13:44:00.644767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.644827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.644853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.395 [2024-10-01 13:44:00.644870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.645807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.645853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.646059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.646096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.646115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.646134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.646151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.646165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.646280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.395 [2024-10-01 13:44:00.646326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.395 [2024-10-01 13:44:00.655323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.655376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.655479] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.655511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.395 [2024-10-01 13:44:00.655530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.655601] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.655628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.395 [2024-10-01 13:44:00.655645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.655679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.655702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.655730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.655747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.655762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.655779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.655794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.655808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.656087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.395 [2024-10-01 13:44:00.656116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.395 8504.50 IOPS, 33.22 MiB/s [2024-10-01 13:44:00.668092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.668144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.669212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.669259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.395 [2024-10-01 13:44:00.669281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.669334] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.669359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.395 [2024-10-01 13:44:00.669375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.670218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.670265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.670448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.670504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.670524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.670558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.670577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.670590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.670705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.395 [2024-10-01 13:44:00.670728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.395 [2024-10-01 13:44:00.679445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.679497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.679628] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.679661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.395 [2024-10-01 13:44:00.679680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.679730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.679756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.395 [2024-10-01 13:44:00.679773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.679807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.679830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.679857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.679886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.679902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.679920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.679936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.679950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.679983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.395 [2024-10-01 13:44:00.680003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.395 [2024-10-01 13:44:00.690581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.690635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.690735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.690767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.395 [2024-10-01 13:44:00.690786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.690835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.690883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.395 [2024-10-01 13:44:00.690903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.395 [2024-10-01 13:44:00.690936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.690960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.395 [2024-10-01 13:44:00.690987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.691005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.691020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.691037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.395 [2024-10-01 13:44:00.691052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.395 [2024-10-01 13:44:00.691066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.395 [2024-10-01 13:44:00.691097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.395 [2024-10-01 13:44:00.691118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.395 [2024-10-01 13:44:00.700724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.700801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.395 [2024-10-01 13:44:00.700885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.395 [2024-10-01 13:44:00.700916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.395 [2024-10-01 13:44:00.700935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.701922] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.701967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.396 [2024-10-01 13:44:00.701989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.702009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.702200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.702240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.702259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.702273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.703566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.396 [2024-10-01 13:44:00.703605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.703624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.703639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.704521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.396 [2024-10-01 13:44:00.711523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.711591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.711694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.711727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.396 [2024-10-01 13:44:00.711746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.711797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.711823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.396 [2024-10-01 13:44:00.711839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.711884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.711910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.711939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.711957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.711972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.711989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.712005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.712019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.712292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.396 [2024-10-01 13:44:00.712319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.396 [2024-10-01 13:44:00.721721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.721776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.721878] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.721910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.396 [2024-10-01 13:44:00.721929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.721980] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.722005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.396 [2024-10-01 13:44:00.722022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.722056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.722081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.722107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.722126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.722162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.722181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.722197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.722210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.722243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.396 [2024-10-01 13:44:00.722263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.396 [2024-10-01 13:44:00.732749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.732806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.732910] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.732943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.396 [2024-10-01 13:44:00.732961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.733012] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.733037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.396 [2024-10-01 13:44:00.733054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.733088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.733112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.733140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.733157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.733172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.733189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.733205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.733218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.733250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.396 [2024-10-01 13:44:00.733270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.396 [2024-10-01 13:44:00.742898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.742984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.743073] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.743104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.396 [2024-10-01 13:44:00.743123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.744127] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.744173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.396 [2024-10-01 13:44:00.744218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.744240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.744442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.744483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.744501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.744517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.744647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.396 [2024-10-01 13:44:00.744672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.744687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.744701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.745952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.396 [2024-10-01 13:44:00.753856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.753930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.754056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.754091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.396 [2024-10-01 13:44:00.754111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.754161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.754187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.396 [2024-10-01 13:44:00.754204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.754239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.754264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.754291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.754309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.754325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.754343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.754359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.754373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.754406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.396 [2024-10-01 13:44:00.754427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.396 [2024-10-01 13:44:00.765156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.765276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.765417] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.765454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.396 [2024-10-01 13:44:00.765473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.765526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.765569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.396 [2024-10-01 13:44:00.765587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.765624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.765649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.765676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.765695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.765711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.765730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.765745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.765759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.765792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.396 [2024-10-01 13:44:00.765812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.396 [2024-10-01 13:44:00.776936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.777003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.777138] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.777174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.396 [2024-10-01 13:44:00.777193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.777246] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.777271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.396 [2024-10-01 13:44:00.777288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.777340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.777369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.777398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.777416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.777431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.777466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.777485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.777499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.777548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.396 [2024-10-01 13:44:00.777581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.396 [2024-10-01 13:44:00.788115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.788180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.789214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.789260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.396 [2024-10-01 13:44:00.789282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.789337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.789363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.396 [2024-10-01 13:44:00.789380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.789638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.789675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.396 [2024-10-01 13:44:00.789785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.789807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.789823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.789841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.396 [2024-10-01 13:44:00.789857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.396 [2024-10-01 13:44:00.789871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.396 [2024-10-01 13:44:00.791230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.396 [2024-10-01 13:44:00.791268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.396 [2024-10-01 13:44:00.798943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.799001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.396 [2024-10-01 13:44:00.799108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.799142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.396 [2024-10-01 13:44:00.799161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.396 [2024-10-01 13:44:00.799214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.396 [2024-10-01 13:44:00.799240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.397 [2024-10-01 13:44:00.799257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.799317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.799342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.799370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.799387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.799402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.799420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.799435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.799449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.799737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.397 [2024-10-01 13:44:00.799766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.397 [2024-10-01 13:44:00.809083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.809167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.809266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.809297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.397 [2024-10-01 13:44:00.809316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.809385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.809413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.397 [2024-10-01 13:44:00.809431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.809450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.809483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.809504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.809519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.809548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.809585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.397 [2024-10-01 13:44:00.809607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.809622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.809637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.809667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.397 [2024-10-01 13:44:00.820241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.820301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.820433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.820477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.397 [2024-10-01 13:44:00.820498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.820565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.820593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.397 [2024-10-01 13:44:00.820610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.820645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.820669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.820696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.820715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.820729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.820746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.820762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.820776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.820808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.397 [2024-10-01 13:44:00.820828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.397 [2024-10-01 13:44:00.830573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.830628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.830733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.830765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.397 [2024-10-01 13:44:00.830784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.830835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.830861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.397 [2024-10-01 13:44:00.830878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.831815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.831860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.832084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.832123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.832142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.832160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.832176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.832213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.832331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.397 [2024-10-01 13:44:00.832354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.397 [2024-10-01 13:44:00.841489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.841569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.841686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.841721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.397 [2024-10-01 13:44:00.841748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.841836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.841895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.397 [2024-10-01 13:44:00.841930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.842224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.842270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.842420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.842457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.842475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.842495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.842511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.842525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.842656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.397 [2024-10-01 13:44:00.842681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.397 [2024-10-01 13:44:00.851662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.851769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.851871] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.851919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.397 [2024-10-01 13:44:00.851938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.852011] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.852040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.397 [2024-10-01 13:44:00.852057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.852077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.852159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.852187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.852203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.852219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.852253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.397 [2024-10-01 13:44:00.852274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.852288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.852302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.852332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.397 [2024-10-01 13:44:00.864371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.864455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.864602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.864639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.397 [2024-10-01 13:44:00.864659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.864713] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.864739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.397 [2024-10-01 13:44:00.864756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.864792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.864816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.864844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.864863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.864879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.864897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.864913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.864927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.864959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.397 [2024-10-01 13:44:00.864979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.397 [2024-10-01 13:44:00.875520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.875633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.875775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.875812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.397 [2024-10-01 13:44:00.875867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.875948] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.875975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.397 [2024-10-01 13:44:00.875992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.876956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.877006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.877215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.877255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.877275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.877294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.877310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.877323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.877465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.397 [2024-10-01 13:44:00.877491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.397 [2024-10-01 13:44:00.887337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.887402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.887583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.887629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.397 [2024-10-01 13:44:00.887651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.887705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.887731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf2e0 with addr=10.0.0.3, port=4420 00:16:17.397 [2024-10-01 13:44:00.887748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf2e0 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.887784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.887808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf2e0 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.887835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.887853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.887869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.887899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.887915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.887949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.888225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.397 [2024-10-01 13:44:00.888262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.397 [2024-10-01 13:44:00.897744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.897802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.897972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.898006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.397 [2024-10-01 13:44:00.898026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.898090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.898134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.898154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.898170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.397 [2024-10-01 13:44:00.898203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.397 [2024-10-01 13:44:00.909279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.397 [2024-10-01 13:44:00.909434] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.397 [2024-10-01 13:44:00.909469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.397 [2024-10-01 13:44:00.909488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.397 [2024-10-01 13:44:00.909562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.397 [2024-10-01 13:44:00.909607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.397 [2024-10-01 13:44:00.909626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.397 [2024-10-01 13:44:00.909641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:00.909694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.398 [2024-10-01 13:44:00.912511] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:17.398 [2024-10-01 13:44:00.922263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:00.922614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:00.922661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:00.922684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:00.922804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:00.922899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:00.922934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:00.922952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:00.923015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.398 [2024-10-01 13:44:00.933047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:00.933178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:00.933223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:00.933244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:00.933283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:00.933321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:00.933339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:00.933354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:00.933391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.398 [2024-10-01 13:44:00.943718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:00.943864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:00.943920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:00.943942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:00.943982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:00.944020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:00.944038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:00.944053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:00.944091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.398 [2024-10-01 13:44:00.956575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:00.957395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:00.957449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:00.957472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:00.957818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:00.958018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:00.958056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:00.958076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:00.958225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.398 [2024-10-01 13:44:00.967931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:00.968244] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:00.968294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:00.968347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:00.968436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:00.968506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:00.968547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:00.968567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:00.969680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.398 [2024-10-01 13:44:00.978166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:00.978297] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:00.978337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:00.978356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:00.978395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:00.978433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:00.978451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:00.978466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:00.978503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.398 [2024-10-01 13:44:00.988275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:00.988400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:00.988435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:00.988454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:00.989796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:00.990783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:00.990825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:00.990844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:00.990971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.398 [2024-10-01 13:44:00.999413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:01.000702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:01.000754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:01.000778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:01.001411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:01.001531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:01.001609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:01.001629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:01.001673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.398 [2024-10-01 13:44:01.009589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:01.009791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:01.009828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:01.009847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:01.009889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:01.009928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:01.009946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:01.009962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:01.010000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.398 [2024-10-01 13:44:01.020431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:01.020642] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:01.020680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:01.020700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:01.020742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:01.020781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:01.020800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:01.020816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:01.020854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.398 [2024-10-01 13:44:01.030751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:01.030951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:01.030988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:01.031009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:01.031051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:01.031089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:01.031108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:01.031124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:01.031400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.398 [2024-10-01 13:44:01.040985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:01.041933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:01.041982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:01.042005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:01.042195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:01.042297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:01.042320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:01.042337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:01.042376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.398 [2024-10-01 13:44:01.051734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:01.051978] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:01.052017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:01.052037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:01.052079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:01.052118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:01.052137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:01.052153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:01.053126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.398 [2024-10-01 13:44:01.061920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:01.062120] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:01.062156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:01.062175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:01.062216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:01.062279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:01.062304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:01.062321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:01.062359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.398 [2024-10-01 13:44:01.073327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:01.073528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:01.073579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:01.073600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:01.074749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:01.075427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:01.075467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:01.075487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:01.075586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.398 [2024-10-01 13:44:01.083489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:01.083668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:01.083704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:01.083724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:01.083763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:01.083801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:01.083820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:01.083835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:01.085072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.398 [2024-10-01 13:44:01.094101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:01.094289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:01.094326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:01.094345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:01.094388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:01.094427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:01.094445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:01.094461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:01.094498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.398 [2024-10-01 13:44:01.104479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:01.104689] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:01.104726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:01.104747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:01.104789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:01.104827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:01.104846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:01.104893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:01.105172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.398 [2024-10-01 13:44:01.115364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:01.115584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:01.115622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:01.115641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:01.115685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.398 [2024-10-01 13:44:01.115723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.398 [2024-10-01 13:44:01.115741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.398 [2024-10-01 13:44:01.115757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.398 [2024-10-01 13:44:01.116902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.398 [2024-10-01 13:44:01.126495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.398 [2024-10-01 13:44:01.126723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.398 [2024-10-01 13:44:01.126761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.398 [2024-10-01 13:44:01.126780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.398 [2024-10-01 13:44:01.126823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.126861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.126880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.126897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.126934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.399 [2024-10-01 13:44:01.137762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.137983] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.138020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.138040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.138083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.138121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.138139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.138156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.138194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.399 [2024-10-01 13:44:01.148183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.148380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.148448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.148471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.148513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.148807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.148847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.148867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.149023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.399 [2024-10-01 13:44:01.159177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.159367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.159404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.159424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.159465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.159502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.159520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.159553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.160682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.399 [2024-10-01 13:44:01.170392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.170604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.170640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.170661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.170703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.170741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.170760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.170776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.170814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.399 [2024-10-01 13:44:01.181694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.181885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.181922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.181942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.181983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.182053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.182073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.182088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.182127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.399 [2024-10-01 13:44:01.191915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.192059] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.192094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.192113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.192152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.192189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.192208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.192223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.192259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.399 [2024-10-01 13:44:01.202927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.203125] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.203161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.203181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.203223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.203262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.203280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.203296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.203334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.399 [2024-10-01 13:44:01.214155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.214358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.214396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.214417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.214458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.214496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.214516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.214531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.214617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.399 [2024-10-01 13:44:01.225201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.225355] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.225391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.225410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.225450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.225506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.225529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.225565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.225605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.399 [2024-10-01 13:44:01.235622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.235831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.235867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.235905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.235948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.235986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.236005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.236031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.236069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.399 [2024-10-01 13:44:01.246838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.247039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.247076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.247096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.247144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.248310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.248355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.248376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.248617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.399 [2024-10-01 13:44:01.257781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.257913] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.257948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.257999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.258041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.258079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.258097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.258113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.258165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.399 [2024-10-01 13:44:01.269600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.269895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.269947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.269978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.270941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.271228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.271286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.271320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.272514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.399 [2024-10-01 13:44:01.280595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.280921] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.280987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.281022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.281097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.281158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.281193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.281232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.281291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.399 [2024-10-01 13:44:01.292010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.293212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.293263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.293285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.293939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.294055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.294109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.294136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.294188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.399 [2024-10-01 13:44:01.303458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.304354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.304403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.304426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.304625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.304717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.304745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.304761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.304801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.399 [2024-10-01 13:44:01.313891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.314079] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.314137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.314173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.315389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.315733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.315779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.315799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.315851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.399 [2024-10-01 13:44:01.324053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.399 [2024-10-01 13:44:01.324256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.399 [2024-10-01 13:44:01.324310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.399 [2024-10-01 13:44:01.324332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.399 [2024-10-01 13:44:01.325823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.399 [2024-10-01 13:44:01.326940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.399 [2024-10-01 13:44:01.326990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.399 [2024-10-01 13:44:01.327012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.399 [2024-10-01 13:44:01.327160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.399 [2024-10-01 13:44:01.334272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.334475] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.334525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.334586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.335897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.336225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.336271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.336291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.337423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.400 [2024-10-01 13:44:01.344444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.344680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.344719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.344739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.345713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.345950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.345988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.346008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.346058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.400 [2024-10-01 13:44:01.354628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.356163] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.356215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.356238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.357207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.357365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.357403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.357423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.357465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.400 [2024-10-01 13:44:01.366008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.366151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.366189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.366208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.367358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.368045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.368087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.368106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.368218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.400 [2024-10-01 13:44:01.376122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.376246] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.376281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.376299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.376337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.376391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.376414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.376428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.377657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.400 [2024-10-01 13:44:01.386458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.386594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.386629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.386648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.386687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.386724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.386742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.386756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.386793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.400 [2024-10-01 13:44:01.396585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.396707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.396740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.396758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.396795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.396832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.396850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.396885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.396925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.400 [2024-10-01 13:44:01.407213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.407337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.407370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.407388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.407426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.407462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.407480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.407495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.407531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.400 [2024-10-01 13:44:01.418151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.418275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.418308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.418327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.418365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.418402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.418420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.418434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.418470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.400 [2024-10-01 13:44:01.429505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.429644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.429679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.429698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.429737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.429774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.429793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.429807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.429844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.400 [2024-10-01 13:44:01.439686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.439834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.439868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.439900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.439939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.439976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.439994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.440008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.440044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.400 [2024-10-01 13:44:01.449836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.449961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.449994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.450013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.450050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.450087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.450105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.450120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.450156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.400 [2024-10-01 13:44:01.460318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.460440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.460473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.460491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.460529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.460587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.460606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.460621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.460657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.400 [2024-10-01 13:44:01.471385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.471514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.471564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.471585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.471624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.471685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.471705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.471720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.471757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.400 [2024-10-01 13:44:01.482434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.482629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.482665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.482685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.482725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.482762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.482781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.482797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.482833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.400 [2024-10-01 13:44:01.493479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.493617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.493661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.493682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.493721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.493758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.493776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.493790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.493826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.400 [2024-10-01 13:44:01.503653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.503784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.400 [2024-10-01 13:44:01.503823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.400 [2024-10-01 13:44:01.503843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.400 [2024-10-01 13:44:01.503892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.400 [2024-10-01 13:44:01.503931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.400 [2024-10-01 13:44:01.503949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.400 [2024-10-01 13:44:01.503964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.400 [2024-10-01 13:44:01.504031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.400 [2024-10-01 13:44:01.514310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.400 [2024-10-01 13:44:01.514441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.514480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.514499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.514551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.514593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.514612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.514626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.514664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.401 [2024-10-01 13:44:01.525190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.525313] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.525346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.525365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.525404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.525441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.525459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.525474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.525509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.401 [2024-10-01 13:44:01.536145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.536278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.536312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.536331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.536369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.536406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.536425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.536439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.536475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.401 [2024-10-01 13:44:01.546284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.546410] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.546443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.546491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.546532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.546588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.546606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.546621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.546657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.401 [2024-10-01 13:44:01.557019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.557153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.557192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.557212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.557250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.557287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.557306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.557320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.557357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.401 [2024-10-01 13:44:01.568085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.568218] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.568252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.568271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.568309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.568347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.568365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.568379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.568416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.401 [2024-10-01 13:44:01.579176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.579357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.579394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.579413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.579453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.579491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.579558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.579578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.579618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.401 [2024-10-01 13:44:01.589319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.589442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.589475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.589493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.589531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.589587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.589606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.589621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.589658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.401 [2024-10-01 13:44:01.601691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.602003] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.602051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.602073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.603181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.603848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.603900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.603921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.604265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.401 [2024-10-01 13:44:01.613896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.614205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.614253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.614276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.614361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.614403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.614426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.614442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.615564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.401 [2024-10-01 13:44:01.624960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.625104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.625139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.625159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.625199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.625245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.625263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.625278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.625314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.401 [2024-10-01 13:44:01.635994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.636141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.636175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.636194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.636233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.636270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.636289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.636304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.636341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.401 [2024-10-01 13:44:01.646122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.646249] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.646283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.646302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.646340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.646376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.646395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.646409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.646445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.401 [2024-10-01 13:44:01.656949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.657078] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.657111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.657131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.657199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.657238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.657256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.657271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.657307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.401 8535.44 IOPS, 33.34 MiB/s [2024-10-01 13:44:01.668586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.670102] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.670161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.670187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.670410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.671204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.671245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.671265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.671453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
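(The performance sample interleaved above, 8535.44 IOPS at 33.34 MiB/s, is consistent with a 4 KiB I/O size; the block size is an assumption for this sanity check, not a value printed in the log:

    8535.44 IOPS x 4096 B  =  34,961,162 B/s
    34,961,162 B/s / 1,048,576  =  33.34 MiB/s)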
00:16:17.401 [2024-10-01 13:44:01.679109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.679295] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.679346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.679381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.679439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.679496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.679532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.679588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.680374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.401 [2024-10-01 13:44:01.689256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.689450] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.689488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.689508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.689797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.689977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.690011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.690061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.690183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.401 [2024-10-01 13:44:01.699698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.401 [2024-10-01 13:44:01.699831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.401 [2024-10-01 13:44:01.699866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.401 [2024-10-01 13:44:01.699902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.401 [2024-10-01 13:44:01.699942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.401 [2024-10-01 13:44:01.699980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.401 [2024-10-01 13:44:01.699998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.401 [2024-10-01 13:44:01.700013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.401 [2024-10-01 13:44:01.701144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.401 [2024-10-01 13:44:01.710684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.710814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.710849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.710867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.710906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.710942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.710961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.710976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.711012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.402 [2024-10-01 13:44:01.721799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.721929] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.721963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.721982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.722021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.722057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.722075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.722090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.722126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.402 [2024-10-01 13:44:01.731963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.732121] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.732167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.732185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.732223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.732277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.732300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.732315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.732352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.402 [2024-10-01 13:44:01.742717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.742912] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.742949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.742968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.743010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.743047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.743066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.743082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.744211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.402 [2024-10-01 13:44:01.753827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.754025] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.754061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.754081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.754123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.754160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.754179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.754194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.754232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.402 [2024-10-01 13:44:01.764864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.764989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.765022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.765041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.765078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.765157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.765179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.765194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.765230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.402 [2024-10-01 13:44:01.774989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.775119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.775152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.775178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.775216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.775253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.775271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.775285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.775322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.402 [2024-10-01 13:44:01.785631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.785754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.785787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.785805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.785843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.785880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.785898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.785912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.785949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.402 [2024-10-01 13:44:01.796477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.796614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.796654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.796674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.796712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.796749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.796767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.796782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.796843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.402 [2024-10-01 13:44:01.807477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.807614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.807649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.807668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.807707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.807743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.807761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.807776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.807811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.402 [2024-10-01 13:44:01.817603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.817724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.817757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.817776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.817814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.817851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.817870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.817884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.817921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.402 [2024-10-01 13:44:01.828241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.828371] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.828413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.828433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.828472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.828509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.828527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.828561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.828601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.402 [2024-10-01 13:44:01.839144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.839267] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.839301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.839341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.839382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.839420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.839438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.839452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.839489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.402 [2024-10-01 13:44:01.850150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.850275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.850318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.850340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.850378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.850416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.850434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.850448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.850485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.402 [2024-10-01 13:44:01.860258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.860382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.860414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.860433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.860470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.860508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.860526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.860558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.860598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.402 [2024-10-01 13:44:01.870876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.871003] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.871035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.871054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.871091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.871126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.871165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.871181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.871218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.402 [2024-10-01 13:44:01.881829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.881954] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.881987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.882005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.882043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.882080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.882097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.882111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.882147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.402 [2024-10-01 13:44:01.892771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.892891] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.892924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.892942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.892979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.893015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.893031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.893045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.893080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.402 [2024-10-01 13:44:01.902876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.902998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.903030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.903048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.903085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.903121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.402 [2024-10-01 13:44:01.903138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.402 [2024-10-01 13:44:01.903153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.402 [2024-10-01 13:44:01.903188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.402 [2024-10-01 13:44:01.913492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.402 [2024-10-01 13:44:01.913629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.402 [2024-10-01 13:44:01.913663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.402 [2024-10-01 13:44:01.913682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.402 [2024-10-01 13:44:01.913719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.402 [2024-10-01 13:44:01.913756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:01.913774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:01.913788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:01.913824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.403 [2024-10-01 13:44:01.924345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:01.924466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:01.924498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:01.924516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:01.924571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:01.924611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:01.924630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:01.924644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:01.924679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.403 [2024-10-01 13:44:01.935301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:01.935424] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:01.935456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:01.935474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:01.935512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:01.935563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:01.935583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:01.935597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:01.935633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.403 [2024-10-01 13:44:01.945400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:01.945522] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:01.945570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:01.945612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:01.945654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:01.945691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:01.945709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:01.945723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:01.945759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.403 [2024-10-01 13:44:01.956013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:01.956138] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:01.956171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:01.956190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:01.956227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:01.956264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:01.956282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:01.956296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:01.956332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.403 [2024-10-01 13:44:01.966881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:01.967013] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:01.967046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:01.967065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:01.967102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:01.967139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:01.967156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:01.967170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:01.967207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.403 [2024-10-01 13:44:01.977851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:01.977974] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:01.978007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:01.978025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:01.978063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:01.978099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:01.978117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:01.978159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:01.978198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.403 [2024-10-01 13:44:01.987961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:01.988084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:01.988116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:01.988135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:01.988172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:01.988208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:01.988226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:01.988241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:01.988277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.403 [2024-10-01 13:44:01.998570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:01.998694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:01.998726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:01.998744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:01.998782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:01.998819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:01.998836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:01.998850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:01.998886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.403 [2024-10-01 13:44:02.009469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:02.009616] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:02.009656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:02.009676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:02.009714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:02.009751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:02.009769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:02.009784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:02.009821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.403 [2024-10-01 13:44:02.020413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:02.020580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:02.020625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:02.020644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:02.020683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:02.020721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:02.020738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:02.020752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:02.020789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.403 [2024-10-01 13:44:02.030559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:02.030682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:02.030715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:02.030734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:02.030772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:02.030808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:02.030826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:02.030841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:02.030877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.403 [2024-10-01 13:44:02.041130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:02.041253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:02.041287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:02.041305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:02.041344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:02.041380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:02.041398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:02.041412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:02.041449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.403 [2024-10-01 13:44:02.052009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:02.052131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:02.052164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:02.052182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:02.052239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:02.052277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:02.052296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:02.052310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:02.052346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.403 [2024-10-01 13:44:02.062939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:02.063062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:02.063096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:02.063114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:02.063151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:02.063188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:02.063205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:02.063219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:02.063255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.403 [2024-10-01 13:44:02.073050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:02.073171] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:02.073203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:02.073222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:02.073260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:02.073296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:02.073314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:02.073329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:02.073365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.403 [2024-10-01 13:44:02.083739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:02.083886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:02.083922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:02.083941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:02.083980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:02.084018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:02.084036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:02.084050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:02.084107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.403 [2024-10-01 13:44:02.093853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:02.094913] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:02.094960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:02.094989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:02.095217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:02.095291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:02.095314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:02.095336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:02.095376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.403 [2024-10-01 13:44:02.106348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:02.106480] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:02.106514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:02.106547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:02.106591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:02.106628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:02.106647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:02.106662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:02.106711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.403 [2024-10-01 13:44:02.116741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:02.116872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.403 [2024-10-01 13:44:02.116906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.403 [2024-10-01 13:44:02.116924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.403 [2024-10-01 13:44:02.116962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.403 [2024-10-01 13:44:02.116999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.403 [2024-10-01 13:44:02.117017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.403 [2024-10-01 13:44:02.117031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.403 [2024-10-01 13:44:02.117068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.403 [2024-10-01 13:44:02.127917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.403 [2024-10-01 13:44:02.128221] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.128292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.404 [2024-10-01 13:44:02.128315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.128404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.128446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.128465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.128479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.128516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.404 [2024-10-01 13:44:02.138034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.138171] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.138205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.404 [2024-10-01 13:44:02.138224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.138263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.138299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.138318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.138332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.138368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.404 [2024-10-01 13:44:02.148139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.148264] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.148310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.404 [2024-10-01 13:44:02.148329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.148366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.148402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.148419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.148433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.148469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.404 [2024-10-01 13:44:02.159408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.159547] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.159582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.404 [2024-10-01 13:44:02.159601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.160704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.161381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.161422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.161440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.161547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.404 [2024-10-01 13:44:02.169519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.169656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.169690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.404 [2024-10-01 13:44:02.169709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.169752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.169791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.169809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.169824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.169859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.404 [2024-10-01 13:44:02.180286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.180413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.180448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.404 [2024-10-01 13:44:02.180467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.180505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.180557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.180578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.180593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.180629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.404 [2024-10-01 13:44:02.180988] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb23a00 was disconnected and freed. reset controller. 
00:16:17.404 [2024-10-01 13:44:02.181054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.181129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.184489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.184564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.184587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.184602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.184636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.404 [2024-10-01 13:44:02.191155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.191309] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.191344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.404 [2024-10-01 13:44:02.191362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.191412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.191455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.191489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.191507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.191521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.191568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.404 [2024-10-01 13:44:02.191633] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.191661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.404 [2024-10-01 13:44:02.191678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.191973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.192144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.192180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.192197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.192310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.404 [2024-10-01 13:44:02.202201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.202256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.202358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.202391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.404 [2024-10-01 13:44:02.202409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.202460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.202496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.404 [2024-10-01 13:44:02.202516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.202566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.202593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.203711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.203751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.203769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:17.404 [2024-10-01 13:44:02.203807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.203827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.203840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.204070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.404 [2024-10-01 13:44:02.204099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.404 [2024-10-01 13:44:02.212337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.212420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.212509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.212557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.404 [2024-10-01 13:44:02.212579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.212660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.212691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.404 [2024-10-01 13:44:02.212709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.212729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.213677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.213720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.213739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.213753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.213947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.404 [2024-10-01 13:44:02.213974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.213989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.214004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.214044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.404 [2024-10-01 13:44:02.222447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.222583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.222622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.404 [2024-10-01 13:44:02.222642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.222693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.222735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.222768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.222808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.222824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.224198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.404 [2024-10-01 13:44:02.224299] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.224332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.404 [2024-10-01 13:44:02.224350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.225297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.225445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.225472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.225488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.225523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.404 [2024-10-01 13:44:02.233741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.233812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.233927] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.233961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.404 [2024-10-01 13:44:02.233980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.234032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.234058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.404 [2024-10-01 13:44:02.234081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.235181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.235236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.404 [2024-10-01 13:44:02.235925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.235967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.235987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.236006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.404 [2024-10-01 13:44:02.236021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.404 [2024-10-01 13:44:02.236035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.404 [2024-10-01 13:44:02.236368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.404 [2024-10-01 13:44:02.236407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.404 [2024-10-01 13:44:02.243936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.244044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.404 [2024-10-01 13:44:02.244160] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.244192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.404 [2024-10-01 13:44:02.244212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.244283] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.404 [2024-10-01 13:44:02.244311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.404 [2024-10-01 13:44:02.244328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.404 [2024-10-01 13:44:02.244347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.245622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.245670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.245689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.245704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.245940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.405 [2024-10-01 13:44:02.245970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.245993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.246009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.246804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.405 [2024-10-01 13:44:02.254676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.254736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.254861] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.254896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.405 [2024-10-01 13:44:02.254916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.254968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.255006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.405 [2024-10-01 13:44:02.255027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.255068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.255094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.255121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.255138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.255153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.255170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.255212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.255231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.255267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.405 [2024-10-01 13:44:02.255288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.405 [2024-10-01 13:44:02.265298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.265356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.265460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.265503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.405 [2024-10-01 13:44:02.265524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.265595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.265623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.405 [2024-10-01 13:44:02.265641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.265675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.265699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.265973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.266016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.266046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.266073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.266097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.266111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.266251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.405 [2024-10-01 13:44:02.266287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.405 [2024-10-01 13:44:02.276288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.276342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.276450] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.276484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.405 [2024-10-01 13:44:02.276502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.276573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.276601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.405 [2024-10-01 13:44:02.276618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.276656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.276725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.277856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.277904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.277934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.277956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.277972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.277986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.278233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.405 [2024-10-01 13:44:02.278272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.405 [2024-10-01 13:44:02.287353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.287636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.287746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.287780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.405 [2024-10-01 13:44:02.287799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.287893] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.287924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.405 [2024-10-01 13:44:02.287942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.287961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.287996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.288017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.288031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.288046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.288095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.405 [2024-10-01 13:44:02.288123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.288138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.288152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.288184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.405 [2024-10-01 13:44:02.298805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.298859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.298961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.299012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.405 [2024-10-01 13:44:02.299043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.299096] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.299122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.405 [2024-10-01 13:44:02.299138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.299172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.299195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.299222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.299239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.299254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.299272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.299287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.299301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.299333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.405 [2024-10-01 13:44:02.299353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.405 [2024-10-01 13:44:02.309355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.309407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.309508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.309566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.405 [2024-10-01 13:44:02.309589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.309648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.309674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.405 [2024-10-01 13:44:02.309691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.309726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.309751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.309778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.309803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.309822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.309839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.309854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.309890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.310158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.405 [2024-10-01 13:44:02.310197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.405 [2024-10-01 13:44:02.320263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.320501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.320618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.320652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.405 [2024-10-01 13:44:02.320670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.320796] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.320832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.405 [2024-10-01 13:44:02.320849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.320868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.321975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.322023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.322042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.322056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.322295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.405 [2024-10-01 13:44:02.322324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.322340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.322355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.323441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.405 [2024-10-01 13:44:02.330364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.330487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.330521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.405 [2024-10-01 13:44:02.330556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.331492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.331772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.331813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.331831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.331890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.331919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.405 [2024-10-01 13:44:02.332026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.332059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.405 [2024-10-01 13:44:02.332077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.332110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.332142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.332160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.332174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.332205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.405 [2024-10-01 13:44:02.340458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.340600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.340634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.405 [2024-10-01 13:44:02.340653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.342003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.342986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.343028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.343047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.343178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.343216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.405 [2024-10-01 13:44:02.343301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.343334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.405 [2024-10-01 13:44:02.343352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.343385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.343417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.405 [2024-10-01 13:44:02.343435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.405 [2024-10-01 13:44:02.343449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.405 [2024-10-01 13:44:02.343488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.405 [2024-10-01 13:44:02.351383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.405 [2024-10-01 13:44:02.351507] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.405 [2024-10-01 13:44:02.351553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.405 [2024-10-01 13:44:02.351575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.405 [2024-10-01 13:44:02.352691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.405 [2024-10-01 13:44:02.353371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.353412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.353431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.353552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.406 [2024-10-01 13:44:02.353598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.353692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.353724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.406 [2024-10-01 13:44:02.353743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.354023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.354191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.354237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.354255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.354368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.406 [2024-10-01 13:44:02.361487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.361624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.361659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.406 [2024-10-01 13:44:02.361678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.361712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.361755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.361776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.361790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.361823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.406 [2024-10-01 13:44:02.364370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.364492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.364552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.406 [2024-10-01 13:44:02.364575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.364610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.364643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.364661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.364692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.364727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.406 [2024-10-01 13:44:02.372170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.372298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.372332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.406 [2024-10-01 13:44:02.372350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.372384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.372417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.372435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.372450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.372482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.406 [2024-10-01 13:44:02.375418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.375550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.375585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.406 [2024-10-01 13:44:02.375603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.375637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.375670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.375688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.375702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.375734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.406 [2024-10-01 13:44:02.382338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.382473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.382507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.406 [2024-10-01 13:44:02.382526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.382581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.382615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.382633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.382647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.382679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.406 [2024-10-01 13:44:02.386583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.386705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.386758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.406 [2024-10-01 13:44:02.386780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.386815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.386848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.386866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.386880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.386912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.406 [2024-10-01 13:44:02.393303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.393429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.393475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.406 [2024-10-01 13:44:02.393508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.393562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.393599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.393617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.393631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.393663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.406 [2024-10-01 13:44:02.396903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.397024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.397057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.406 [2024-10-01 13:44:02.397075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.397121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.397158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.397176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.397191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.397230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.406 [2024-10-01 13:44:02.404389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.404514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.404569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.406 [2024-10-01 13:44:02.404592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.404627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.404685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.404705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.404719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.404752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.406 [2024-10-01 13:44:02.407787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.407922] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.407965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.406 [2024-10-01 13:44:02.407985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.408019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.408053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.408072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.408086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.408118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.406 [2024-10-01 13:44:02.415503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.415642] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.415687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.406 [2024-10-01 13:44:02.415708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.415742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.415775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.415792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.415806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.415839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.406 [2024-10-01 13:44:02.418893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.419024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.419057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.406 [2024-10-01 13:44:02.419075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.419108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.419140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.419158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.419173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.419222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.406 [2024-10-01 13:44:02.425948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.426071] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.426105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.406 [2024-10-01 13:44:02.426124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.426166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.426206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.426224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.426239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.426271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.406 [2024-10-01 13:44:02.430171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.430297] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.430337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.406 [2024-10-01 13:44:02.430357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.430390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.430422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.430440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.430455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.430488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.406 [2024-10-01 13:44:02.436800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.436926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.436961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.406 [2024-10-01 13:44:02.436980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.437014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.437047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.437065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.437079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.437112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.406 [2024-10-01 13:44:02.440482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.440625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.440660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.406 [2024-10-01 13:44:02.440699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.440736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.441055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.441100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.441119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.441258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.406 [2024-10-01 13:44:02.447750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.406 [2024-10-01 13:44:02.447895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.406 [2024-10-01 13:44:02.447930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.406 [2024-10-01 13:44:02.447948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.406 [2024-10-01 13:44:02.447983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.406 [2024-10-01 13:44:02.448022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.406 [2024-10-01 13:44:02.448046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.406 [2024-10-01 13:44:02.448060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.406 [2024-10-01 13:44:02.448094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.406 [2024-10-01 13:44:02.451174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.451295] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.451328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.407 [2024-10-01 13:44:02.451347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.451381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.451413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.451431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.451445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.451476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.407 [2024-10-01 13:44:02.458972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.459101] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.459146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.407 [2024-10-01 13:44:02.459165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.459200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.459233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.459273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.459289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.459323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.407 [2024-10-01 13:44:02.462171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.462290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.462324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.407 [2024-10-01 13:44:02.462342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.462376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.462408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.462426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.462440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.462472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.407 [2024-10-01 13:44:02.469187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.469312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.469346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.407 [2024-10-01 13:44:02.469365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.469399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.469432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.469449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.469464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.469496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.407 [2024-10-01 13:44:02.473428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.473609] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.473651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.407 [2024-10-01 13:44:02.473673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.473712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.473746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.473764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.473780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.473813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.407 [2024-10-01 13:44:02.480184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.480362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.480398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.407 [2024-10-01 13:44:02.480418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.480456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.480498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.480517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.480532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.481664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.407 [2024-10-01 13:44:02.483814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.483948] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.483988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.407 [2024-10-01 13:44:02.484008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.484042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.484074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.484092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.484106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.484139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.407 [2024-10-01 13:44:02.491190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.491312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.491345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.407 [2024-10-01 13:44:02.491364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.491398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.491430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.491448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.491463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.491495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.407 [2024-10-01 13:44:02.494591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.494709] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.494742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.407 [2024-10-01 13:44:02.494760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.494811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.494844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.494882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.494904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.494938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.407 [2024-10-01 13:44:02.502372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.502504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.502551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.407 [2024-10-01 13:44:02.502573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.502608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.502653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.502673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.502697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.502735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.407 [2024-10-01 13:44:02.505654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.505774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.505806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.407 [2024-10-01 13:44:02.505825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.505858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.505891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.505909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.505924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.505960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.407 [2024-10-01 13:44:02.512591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.512719] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.512765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.407 [2024-10-01 13:44:02.512787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.512823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.512856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.512874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.512909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.512945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.407 [2024-10-01 13:44:02.516782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.516911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.516945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.407 [2024-10-01 13:44:02.516964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.516998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.517035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.517058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.517072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.517105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.407 [2024-10-01 13:44:02.523379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.523565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.523601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.407 [2024-10-01 13:44:02.523620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.523657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.523691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.523709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.523724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.523759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.407 [2024-10-01 13:44:02.527025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.527184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.527219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.407 [2024-10-01 13:44:02.527239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.527276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.527308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.527326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.527341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.527378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.407 [2024-10-01 13:44:02.533527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.534716] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.534816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.407 [2024-10-01 13:44:02.534844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.535077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.535131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.535152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.535168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.535204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.407 [2024-10-01 13:44:02.538146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.538315] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.538349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.407 [2024-10-01 13:44:02.538367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.538402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.538443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.538463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.538487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.538525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.407 [2024-10-01 13:44:02.546073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.546202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.546238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.407 [2024-10-01 13:44:02.546257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.546291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.546323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.546341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.546355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.546388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.407 [2024-10-01 13:44:02.549181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.549487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.549548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.407 [2024-10-01 13:44:02.549573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.549618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.407 [2024-10-01 13:44:02.549681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.407 [2024-10-01 13:44:02.549703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.407 [2024-10-01 13:44:02.549717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.407 [2024-10-01 13:44:02.549751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.407 [2024-10-01 13:44:02.556520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.407 [2024-10-01 13:44:02.556670] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.407 [2024-10-01 13:44:02.556704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.407 [2024-10-01 13:44:02.556723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.407 [2024-10-01 13:44:02.556759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.556793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.556820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.556835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.556869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.408 [2024-10-01 13:44:02.560713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.560845] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.560879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.408 [2024-10-01 13:44:02.560897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.560931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.560963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.560981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.560996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.561028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.408 [2024-10-01 13:44:02.567414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.567565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.567601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.408 [2024-10-01 13:44:02.567629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.567669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.567713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.567732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.567746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.567802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.408 [2024-10-01 13:44:02.571080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.571208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.571242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.408 [2024-10-01 13:44:02.571261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.571302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.571335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.571353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.571367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.571399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.408 [2024-10-01 13:44:02.578558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.578705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.578743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.408 [2024-10-01 13:44:02.578761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.578797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.578830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.578848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.578863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.578896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.408 [2024-10-01 13:44:02.581965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.582092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.582132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.408 [2024-10-01 13:44:02.582151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.582185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.582217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.582235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.582249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.582282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.408 [2024-10-01 13:44:02.589788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.589915] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.589956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.408 [2024-10-01 13:44:02.590002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.590039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.590072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.590090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.590104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.590137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.408 [2024-10-01 13:44:02.593171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.593293] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.593327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.408 [2024-10-01 13:44:02.593346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.593380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.593412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.593430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.593445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.593483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.408 [2024-10-01 13:44:02.600176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.600340] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.600400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.408 [2024-10-01 13:44:02.600436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.600491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.600562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.600595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.600622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.600961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.408 [2024-10-01 13:44:02.604416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.604582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.604640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.408 [2024-10-01 13:44:02.604676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.604731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.604782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.604841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.604868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.604920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.408 [2024-10-01 13:44:02.611198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.611347] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.611404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.408 [2024-10-01 13:44:02.611439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.611494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.611567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.611602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.611626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.612805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.408 [2024-10-01 13:44:02.614863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.615010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.615068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.408 [2024-10-01 13:44:02.615102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.615157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.615209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.615239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.615264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.615608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.408 [2024-10-01 13:44:02.622362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.622514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.622587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.408 [2024-10-01 13:44:02.622623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.622679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.622733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.622764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.622790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.622841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.408 [2024-10-01 13:44:02.625749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.625898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.625955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.408 [2024-10-01 13:44:02.625990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.626044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.626097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.626128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.626152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.627309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.408 [2024-10-01 13:44:02.633487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.633703] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.633780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.408 [2024-10-01 13:44:02.633817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.633876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.633928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.633959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.633985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.634038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.408 [2024-10-01 13:44:02.636936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.408 [2024-10-01 13:44:02.637107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.408 [2024-10-01 13:44:02.637166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.408 [2024-10-01 13:44:02.637203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.408 [2024-10-01 13:44:02.637258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.408 [2024-10-01 13:44:02.637310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.408 [2024-10-01 13:44:02.637342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.408 [2024-10-01 13:44:02.637366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.408 [2024-10-01 13:44:02.637418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.408 [2024-10-01 13:44:02.644041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.644279] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.644331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.409 [2024-10-01 13:44:02.644365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.644479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.644827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.644871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.644903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.645135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.409 [2024-10-01 13:44:02.648399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.648587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.648643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.409 [2024-10-01 13:44:02.648679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.648733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.648786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.648818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.648843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.648893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.409 [2024-10-01 13:44:02.654504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.654668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.654714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.409 [2024-10-01 13:44:02.654747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.655570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.655805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.655848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.655890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.656018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.409 [2024-10-01 13:44:02.659249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.659400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.659456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.409 [2024-10-01 13:44:02.659492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.659565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.659621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.659652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.659711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.659768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.409 8565.90 IOPS, 33.46 MiB/s [2024-10-01 13:44:02.668108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.669667] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.669723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.409 [2024-10-01 13:44:02.669756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.670078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.670917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.670983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.671018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.671043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.672367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.409 [2024-10-01 13:44:02.672480] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.672525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.409 [2024-10-01 13:44:02.672574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.672829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.673959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.674002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.674033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.674747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.409 [2024-10-01 13:44:02.678665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.678815] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.678863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.409 [2024-10-01 13:44:02.678895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.678950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.679004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.679034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.679060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.679111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.409 [2024-10-01 13:44:02.682090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.682267] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.682324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.409 [2024-10-01 13:44:02.682359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.682413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.682466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.682497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.682522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.682595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.409 [2024-10-01 13:44:02.689215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.689367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.689426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.409 [2024-10-01 13:44:02.689463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.689517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.689591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.689623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.689648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.689981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.409 [2024-10-01 13:44:02.693419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.693583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.693642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.409 [2024-10-01 13:44:02.693678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.693733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.693785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.693816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.693840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.693891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.409 [2024-10-01 13:44:02.699345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.700250] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.700304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.409 [2024-10-01 13:44:02.700337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.700580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.700726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.700772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.700804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.701953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.409 [2024-10-01 13:44:02.704029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.704179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.704231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.409 [2024-10-01 13:44:02.704267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.704321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.704374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.704405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.704430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.704776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.409 [2024-10-01 13:44:02.710087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.710237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.710295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.409 [2024-10-01 13:44:02.710330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.710385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.710438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.710468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.710493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.711499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.409 [2024-10-01 13:44:02.714865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.715166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.715219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.409 [2024-10-01 13:44:02.715253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.715368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.715424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.715456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.715481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.716668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.409 [2024-10-01 13:44:02.720192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.720339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.720396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.409 [2024-10-01 13:44:02.720431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.720485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.720554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.720588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.720613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.720665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.409 [2024-10-01 13:44:02.724974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.725123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.725181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.409 [2024-10-01 13:44:02.725215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.726204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.726486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.726529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.726580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.726647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.409 [2024-10-01 13:44:02.731515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.731692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.731746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.409 [2024-10-01 13:44:02.731780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.409 [2024-10-01 13:44:02.732914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.409 [2024-10-01 13:44:02.733610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.409 [2024-10-01 13:44:02.733654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.409 [2024-10-01 13:44:02.733684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.409 [2024-10-01 13:44:02.733794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.409 [2024-10-01 13:44:02.735084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.409 [2024-10-01 13:44:02.735247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.409 [2024-10-01 13:44:02.735304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.409 [2024-10-01 13:44:02.735366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.736766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.737768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.737813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.737844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.738008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.410 [2024-10-01 13:44:02.741643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.741795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.741852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.410 [2024-10-01 13:44:02.741887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.741941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.742015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.742048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.742073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.742125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.410 [2024-10-01 13:44:02.746241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.746391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.746448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.410 [2024-10-01 13:44:02.746483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.747623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.748354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.748400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.748430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.748570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.410 [2024-10-01 13:44:02.752405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.752570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.752627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.410 [2024-10-01 13:44:02.752662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.752717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.752770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.752831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.752856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.752909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.410 [2024-10-01 13:44:02.756346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.756483] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.756518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.410 [2024-10-01 13:44:02.756555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.756594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.756627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.756646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.756661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.756693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.410 [2024-10-01 13:44:02.762778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.762901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.762935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.410 [2024-10-01 13:44:02.762954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.762986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.763018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.763035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.763050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.763082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.410 [2024-10-01 13:44:02.766960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.767081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.767114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.410 [2024-10-01 13:44:02.767132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.767165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.767198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.767216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.767230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.767262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.410 [2024-10-01 13:44:02.773709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.773864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.773924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.410 [2024-10-01 13:44:02.773960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.774014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.774067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.774097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.774123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.775327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.410 [2024-10-01 13:44:02.777384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.777548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.777612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.410 [2024-10-01 13:44:02.777647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.777702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.777755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.777786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.777811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.778141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.410 [2024-10-01 13:44:02.784870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.785020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.785068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.410 [2024-10-01 13:44:02.785102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.785155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.785208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.785238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.785264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.785314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.410 [2024-10-01 13:44:02.788264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.788413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.788470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.410 [2024-10-01 13:44:02.788504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.788608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.789770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.789814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.789845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.790132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.410 [2024-10-01 13:44:02.795974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.796122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.796178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.410 [2024-10-01 13:44:02.796212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.796266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.796320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.796350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.796374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.796426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.410 [2024-10-01 13:44:02.799260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.799407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.799463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.410 [2024-10-01 13:44:02.799498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.799568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.799625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.799656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.799681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.799731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.410 [2024-10-01 13:44:02.806188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.806337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.806393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.410 [2024-10-01 13:44:02.806428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.806482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.806554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.806589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.806642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.806967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.410 [2024-10-01 13:44:02.810333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.810481] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.810550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.410 [2024-10-01 13:44:02.810587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.810641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.810693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.810724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.810753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.810803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.410 [2024-10-01 13:44:02.816973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.817124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.817181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.410 [2024-10-01 13:44:02.817222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.818384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.818720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.818763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.818791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.819955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.410 [2024-10-01 13:44:02.820772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.820921] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.820975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.410 [2024-10-01 13:44:02.821009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.821328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.821529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.821593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.821620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.821773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.410 [2024-10-01 13:44:02.828198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.828385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.828441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.410 [2024-10-01 13:44:02.828473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.828530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.828607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.828635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.828658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.828711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.410 [2024-10-01 13:44:02.832923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.833087] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.833144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.410 [2024-10-01 13:44:02.833180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.834319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.835014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.835059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.835091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.835200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.410 [2024-10-01 13:44:02.839047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.410 [2024-10-01 13:44:02.839196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.410 [2024-10-01 13:44:02.839252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.410 [2024-10-01 13:44:02.839287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.410 [2024-10-01 13:44:02.839341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.410 [2024-10-01 13:44:02.839395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.410 [2024-10-01 13:44:02.839425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.410 [2024-10-01 13:44:02.839449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.410 [2024-10-01 13:44:02.839499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.410 [2024-10-01 13:44:02.843032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.843179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.843237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.411 [2024-10-01 13:44:02.843271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.843325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.844636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.844681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.844713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.844976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.411 [2024-10-01 13:44:02.849388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.849552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.849609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.411 [2024-10-01 13:44:02.849645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.849701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.849754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.849786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.849811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.849861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.411 [2024-10-01 13:44:02.853689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.853838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.853890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.411 [2024-10-01 13:44:02.853925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.853979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.854031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.854062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.854088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.854138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.411 [2024-10-01 13:44:02.860373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.860596] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.860650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.411 [2024-10-01 13:44:02.860683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.860740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.860794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.860825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.860849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.862035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.411 [2024-10-01 13:44:02.864103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.864252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.864309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.411 [2024-10-01 13:44:02.864344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.864398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.864451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.864483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.864508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.864849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.411 [2024-10-01 13:44:02.871418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.871895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.871954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.411 [2024-10-01 13:44:02.871990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.872062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.872120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.872152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.872180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.872262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.411 [2024-10-01 13:44:02.875142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.875293] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.875352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.411 [2024-10-01 13:44:02.875388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.875443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.875496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.875527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.875575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.876757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.411 [2024-10-01 13:44:02.882902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.883079] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.883138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.411 [2024-10-01 13:44:02.883233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.883296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.883353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.883384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.883409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.883462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.411 [2024-10-01 13:44:02.886350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.886508] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.886569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.411 [2024-10-01 13:44:02.886603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.886659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.886713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.886744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.886769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.886822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.411 [2024-10-01 13:44:02.893568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.893734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.893793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.411 [2024-10-01 13:44:02.893829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.893885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.893938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.893969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.893994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.894333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.411 [2024-10-01 13:44:02.897806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.897958] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.898015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.411 [2024-10-01 13:44:02.898050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.898104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.898156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.898220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.898246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.898299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.411 [2024-10-01 13:44:02.903751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.904704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.904767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.411 [2024-10-01 13:44:02.904796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.904991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.905095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.905139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.905158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.906305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.411 [2024-10-01 13:44:02.908442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.908588] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.908632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.411 [2024-10-01 13:44:02.908655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.908702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.908737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.908756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.908771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.908804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.411 [2024-10-01 13:44:02.914444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.914581] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.914625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.411 [2024-10-01 13:44:02.914646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.914682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.914715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.914732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.914747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.914780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.411 [2024-10-01 13:44:02.919404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.919562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.919598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.411 [2024-10-01 13:44:02.919617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.919652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.919685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.919703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.919718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.919750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.411 [2024-10-01 13:44:02.924579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.924771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.924820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.411 [2024-10-01 13:44:02.924843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.924881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.924916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.924940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.924957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.926339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.411 [2024-10-01 13:44:02.929515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.929701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.929738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.411 [2024-10-01 13:44:02.929757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.930743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.931014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.931054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.931074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.931120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.411 [2024-10-01 13:44:02.934723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.934876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.934911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.411 [2024-10-01 13:44:02.934930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.936135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.411 [2024-10-01 13:44:02.936388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.411 [2024-10-01 13:44:02.936426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.411 [2024-10-01 13:44:02.936445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.411 [2024-10-01 13:44:02.937577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.411 [2024-10-01 13:44:02.939644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.411 [2024-10-01 13:44:02.939763] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.411 [2024-10-01 13:44:02.939796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.411 [2024-10-01 13:44:02.939814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.411 [2024-10-01 13:44:02.939848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:02.939895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:02.939917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:02.939931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:02.939964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.412 [2024-10-01 13:44:02.945598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:02.945720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:02.945754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.412 [2024-10-01 13:44:02.945772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:02.945805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:02.945838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:02.945855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:02.945870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:02.945902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.412 [2024-10-01 13:44:02.950740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:02.950863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:02.950896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.412 [2024-10-01 13:44:02.950915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:02.952035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:02.952751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:02.952795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:02.952837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:02.952937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.412 [2024-10-01 13:44:02.956769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:02.956903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:02.956936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.412 [2024-10-01 13:44:02.956956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:02.956990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:02.957021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:02.957039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:02.957054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:02.957086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.412 [2024-10-01 13:44:02.960833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:02.960952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:02.960985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.412 [2024-10-01 13:44:02.961011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:02.961049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:02.961081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:02.961098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:02.961113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:02.962355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.412 [2024-10-01 13:44:02.967083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:02.967207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:02.967240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.412 [2024-10-01 13:44:02.967260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:02.967294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:02.967327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:02.967344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:02.967358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:02.967390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.412 [2024-10-01 13:44:02.971319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:02.971463] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:02.971497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.412 [2024-10-01 13:44:02.971516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:02.971567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:02.971603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:02.971621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:02.971636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:02.971669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.412 [2024-10-01 13:44:02.977928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:02.978051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:02.978085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.412 [2024-10-01 13:44:02.978115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:02.978165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:02.978201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:02.978219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:02.978233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:02.978266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.412 [2024-10-01 13:44:02.981452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:02.981583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:02.981617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.412 [2024-10-01 13:44:02.981636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:02.981670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:02.981702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:02.981722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:02.981736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:02.981778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.412 [2024-10-01 13:44:02.988918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:02.989040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:02.989074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.412 [2024-10-01 13:44:02.989092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:02.989127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:02.989183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:02.989203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:02.989217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:02.989249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.412 [2024-10-01 13:44:02.992263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:02.992387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:02.992421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.412 [2024-10-01 13:44:02.992439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:02.992473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:02.992506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:02.992523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:02.992554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:02.992590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.412 [2024-10-01 13:44:03.000114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:03.000270] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:03.000306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.412 [2024-10-01 13:44:03.000325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:03.000360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:03.000393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:03.000411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:03.000429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:03.000471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.412 [2024-10-01 13:44:03.003381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:03.003500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:03.003552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.412 [2024-10-01 13:44:03.003578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:03.003613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:03.003647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:03.003665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:03.003680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:03.003736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.412 [2024-10-01 13:44:03.010290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:03.010413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:03.010447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.412 [2024-10-01 13:44:03.010466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:03.010500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:03.010547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:03.010569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:03.010584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:03.010636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.412 [2024-10-01 13:44:03.014480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:03.014614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:03.014648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.412 [2024-10-01 13:44:03.014667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:03.014717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:03.014754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:03.014772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:03.014787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:03.014818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.412 [2024-10-01 13:44:03.021238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:03.021365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:03.021399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.412 [2024-10-01 13:44:03.021418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:03.021452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:03.021485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:03.021502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:03.021516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:03.021567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.412 [2024-10-01 13:44:03.025403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:03.025524] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:03.025574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.412 [2024-10-01 13:44:03.025616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:03.025653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:03.025686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:03.025705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:03.025719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:03.025751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.412 [2024-10-01 13:44:03.031361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:03.031565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:03.031602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.412 [2024-10-01 13:44:03.031622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:03.031660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:03.031693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:03.031712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:03.031728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:03.031761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.412 [2024-10-01 13:44:03.035553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:03.035720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.412 [2024-10-01 13:44:03.035756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.412 [2024-10-01 13:44:03.035775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.412 [2024-10-01 13:44:03.037054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.412 [2024-10-01 13:44:03.037278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.412 [2024-10-01 13:44:03.037312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.412 [2024-10-01 13:44:03.037330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.412 [2024-10-01 13:44:03.037367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.412 [2024-10-01 13:44:03.042033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.412 [2024-10-01 13:44:03.042211] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.042248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.413 [2024-10-01 13:44:03.042268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.042305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.042355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.042400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.042417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.042452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.413 [2024-10-01 13:44:03.046314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.046448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.046482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.413 [2024-10-01 13:44:03.046501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.046551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.046588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.046607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.046622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.046654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.413 [2024-10-01 13:44:03.052929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.053069] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.053103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.413 [2024-10-01 13:44:03.053121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.053156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.053188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.053215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.053234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.053268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.413 [2024-10-01 13:44:03.056491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.056635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.056673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.413 [2024-10-01 13:44:03.056692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.056729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.056783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.056805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.056820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.056853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.413 [2024-10-01 13:44:03.063862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.064000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.064034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.413 [2024-10-01 13:44:03.064054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.064100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.064135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.064157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.064180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.064216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.413 [2024-10-01 13:44:03.067168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.067301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.067335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.413 [2024-10-01 13:44:03.067355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.067400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.067434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.067452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.067466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.067498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.413 [2024-10-01 13:44:03.074877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.075009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.075052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.413 [2024-10-01 13:44:03.075074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.075109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.075144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.075171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.075187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.075221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.413 [2024-10-01 13:44:03.078057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.078202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.078237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.413 [2024-10-01 13:44:03.078256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.078325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.078359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.078377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.078392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.078424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.413 [2024-10-01 13:44:03.084995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.085124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.085157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.413 [2024-10-01 13:44:03.085176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.085210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.085244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.085261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.085276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.085308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.413 [2024-10-01 13:44:03.089099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.089222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.089255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.413 [2024-10-01 13:44:03.089274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.089308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.089340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.089359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.089374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.089406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.413 [2024-10-01 13:44:03.095547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.095677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.095713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.413 [2024-10-01 13:44:03.095732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.095767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.095800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.095817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.095856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.095907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.413 [2024-10-01 13:44:03.099201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.099328] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.099362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.413 [2024-10-01 13:44:03.099380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.099415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.099700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.099749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.099768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.099929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.413 [2024-10-01 13:44:03.106351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.106473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.106507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.413 [2024-10-01 13:44:03.106524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.106575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.106610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.106628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.106641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.106675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.413 [2024-10-01 13:44:03.109657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.109785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.109819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.413 [2024-10-01 13:44:03.109838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.109872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.109904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.109922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.109936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.109969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.413 [2024-10-01 13:44:03.117342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.117492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.117527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.413 [2024-10-01 13:44:03.117564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.117601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.117634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.117652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.117667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.117700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.413 [2024-10-01 13:44:03.120663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.120783] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.120823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.413 [2024-10-01 13:44:03.120842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.120877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.120909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.120927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.120941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.120973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.413 [2024-10-01 13:44:03.127475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.127615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.127649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.413 [2024-10-01 13:44:03.127668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.127702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.127735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.127753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.127767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.127800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.413 [2024-10-01 13:44:03.131586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.131711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.131744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.413 [2024-10-01 13:44:03.131763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.413 [2024-10-01 13:44:03.131823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.413 [2024-10-01 13:44:03.131858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.413 [2024-10-01 13:44:03.131889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.413 [2024-10-01 13:44:03.131908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.413 [2024-10-01 13:44:03.131941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.413 [2024-10-01 13:44:03.138190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.413 [2024-10-01 13:44:03.138316] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.413 [2024-10-01 13:44:03.138350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.413 [2024-10-01 13:44:03.138369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.138403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.138434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.138451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.138465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.138497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.414 [2024-10-01 13:44:03.142169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.142300] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.142334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.414 [2024-10-01 13:44:03.142352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.142386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.142435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.142456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.142471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.142503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.414 [2024-10-01 13:44:03.148293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.148414] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.148448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.414 [2024-10-01 13:44:03.148466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.148501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.149472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.149515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.149548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.149774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.414 [2024-10-01 13:44:03.153022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.153151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.153186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.414 [2024-10-01 13:44:03.153205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.153239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.153272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.153289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.153304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.153336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.414 [2024-10-01 13:44:03.160677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.160807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.160841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.414 [2024-10-01 13:44:03.160860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.160897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.160930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.160947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.160961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.160994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.414 [2024-10-01 13:44:03.163890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.164011] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.164050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.414 [2024-10-01 13:44:03.164069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.164104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.164143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.164161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.164176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.164207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.414 [2024-10-01 13:44:03.170783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.170910] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.170944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.414 [2024-10-01 13:44:03.170996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.171033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.171066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.171084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.171107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.171376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.414 [2024-10-01 13:44:03.174950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.175114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.175149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.414 [2024-10-01 13:44:03.175168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.175204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.175237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.175256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.175272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.175304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.414 [2024-10-01 13:44:03.181484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.181622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.181656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.414 [2024-10-01 13:44:03.181675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.181709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.181743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.181760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.181775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.181807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.414 [2024-10-01 13:44:03.185075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.185194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.185227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.414 [2024-10-01 13:44:03.185246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.185280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.185338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.185359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.185374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.185674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.414 [2024-10-01 13:44:03.192433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.192570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.192605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.414 [2024-10-01 13:44:03.192630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.192669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.192702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.192720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.192735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.192768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.414 [2024-10-01 13:44:03.195733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.195852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.195902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.414 [2024-10-01 13:44:03.195923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.195957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.195990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.196007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.196022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.196054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.414 [2024-10-01 13:44:03.203463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.203607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.203642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.414 [2024-10-01 13:44:03.203661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.203696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.203741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.203760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.203775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.203808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.414 [2024-10-01 13:44:03.206624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.206743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.206782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.414 [2024-10-01 13:44:03.206802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.206836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.206868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.206886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.206900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.206932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.414 [2024-10-01 13:44:03.213585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.213734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.213770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.414 [2024-10-01 13:44:03.213789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.213825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.213859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.213878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.213892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.214166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.414 [2024-10-01 13:44:03.217659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.217788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.217822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.414 [2024-10-01 13:44:03.217841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.217876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.217909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.217927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.217941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.217973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.414 [2024-10-01 13:44:03.224124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.224246] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.224281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.414 [2024-10-01 13:44:03.224343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.224383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.224416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.224435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.224449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.224483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.414 [2024-10-01 13:44:03.227760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.227895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.227931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.414 [2024-10-01 13:44:03.227949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.227984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.228017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.228035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.228049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.228313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.414 [2024-10-01 13:44:03.234950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.235074] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.235108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.414 [2024-10-01 13:44:03.235126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.414 [2024-10-01 13:44:03.235160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.414 [2024-10-01 13:44:03.235192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.414 [2024-10-01 13:44:03.235210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.414 [2024-10-01 13:44:03.235224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.414 [2024-10-01 13:44:03.235255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.414 [2024-10-01 13:44:03.238222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.414 [2024-10-01 13:44:03.238343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.414 [2024-10-01 13:44:03.238376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.415 [2024-10-01 13:44:03.238395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.238429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.238461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.238479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.238511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.238563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.415 [2024-10-01 13:44:03.245882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.246006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.246039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.415 [2024-10-01 13:44:03.246058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.246091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.246124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.246157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.246176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.246209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.415 [2024-10-01 13:44:03.249035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.249154] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.249187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.415 [2024-10-01 13:44:03.249205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.249238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.249272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.249290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.249304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.249335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.415 [2024-10-01 13:44:03.255985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.256108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.256149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.415 [2024-10-01 13:44:03.256168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.256201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.256233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.256250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.256265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.256554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.415 [2024-10-01 13:44:03.259994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.260141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.260176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.415 [2024-10-01 13:44:03.260195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.260229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.260261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.260280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.260294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.260326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.415 [2024-10-01 13:44:03.266403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.266529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.266577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.415 [2024-10-01 13:44:03.266597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.266632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.266665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.266683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.266696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.266728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.415 [2024-10-01 13:44:03.270144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.270264] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.270297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.415 [2024-10-01 13:44:03.270315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.270348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.270381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.270399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.270414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.270445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.415 [2024-10-01 13:44:03.277563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.277697] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.277730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.415 [2024-10-01 13:44:03.277748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.277806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.277841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.277859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.277875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.277908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.415 [2024-10-01 13:44:03.280951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.281098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.281133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.415 [2024-10-01 13:44:03.281151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.281187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.281220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.281238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.281253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.282372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.415 [2024-10-01 13:44:03.288634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.288801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.288839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.415 [2024-10-01 13:44:03.288858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.288912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.288959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.288986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.289002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.289036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.415 [2024-10-01 13:44:03.291826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.291962] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.291996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.415 [2024-10-01 13:44:03.292015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.292055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.292088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.292106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.292120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.292177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.415 [2024-10-01 13:44:03.298756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.298895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.298930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.415 [2024-10-01 13:44:03.298949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.298984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.299255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.299295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.299314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.299448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.415 [2024-10-01 13:44:03.302812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.302936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.302969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.415 [2024-10-01 13:44:03.302988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.303021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.303053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.303072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.303086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.303118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.415 [2024-10-01 13:44:03.309278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.309405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.309439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.415 [2024-10-01 13:44:03.309458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.309492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.309525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.309559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.309575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.309609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.415 [2024-10-01 13:44:03.312916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.313037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.313097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.415 [2024-10-01 13:44:03.313118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.313153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.313186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.313204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.313219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.313491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.415 [2024-10-01 13:44:03.320252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.320374] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.320407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.415 [2024-10-01 13:44:03.320425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.320459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.320491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.320508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.320523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.320572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.415 [2024-10-01 13:44:03.323552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.323670] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.323703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.415 [2024-10-01 13:44:03.323721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.323755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.323787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.323805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.323819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.323858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.415 [2024-10-01 13:44:03.331295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.331418] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.331452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.415 [2024-10-01 13:44:03.331470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.331504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.331575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.331597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.331611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.331644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.415 [2024-10-01 13:44:03.334522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.334664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.334697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.415 [2024-10-01 13:44:03.334715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.334749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.334790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.334811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.415 [2024-10-01 13:44:03.334825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.415 [2024-10-01 13:44:03.334857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.415 [2024-10-01 13:44:03.341507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.415 [2024-10-01 13:44:03.341647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.415 [2024-10-01 13:44:03.341686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.415 [2024-10-01 13:44:03.341712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.415 [2024-10-01 13:44:03.341748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.415 [2024-10-01 13:44:03.341782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.415 [2024-10-01 13:44:03.341799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.341814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.341846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.416 [2024-10-01 13:44:03.345674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.345797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.345830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.416 [2024-10-01 13:44:03.345848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.345882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.345914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.345932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.345946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.345978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.416 [2024-10-01 13:44:03.352463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.352604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.352639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.416 [2024-10-01 13:44:03.352658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.352692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.352733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.352751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.352765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.352798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.416 [2024-10-01 13:44:03.356082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.356203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.356237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.416 [2024-10-01 13:44:03.356255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.356289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.356323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.356341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.356356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.356388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.416 [2024-10-01 13:44:03.363716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.363856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.363907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.416 [2024-10-01 13:44:03.363928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.363965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.363998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.364015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.364034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.364088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.416 [2024-10-01 13:44:03.367263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.367394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.367429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.416 [2024-10-01 13:44:03.367471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.367509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.367569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.367612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.367633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.367670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.416 [2024-10-01 13:44:03.375278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.375410] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.375445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.416 [2024-10-01 13:44:03.375464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.375499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.375547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.375569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.375584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.375618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.416 [2024-10-01 13:44:03.377365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.377488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.377522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.416 [2024-10-01 13:44:03.377557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.378494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.378728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.378762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.378780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.378824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.416 [2024-10-01 13:44:03.385809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.385946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.385981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.416 [2024-10-01 13:44:03.386000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.386034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.386067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.386085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.386125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.386161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.416 [2024-10-01 13:44:03.387462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.387592] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.387625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.416 [2024-10-01 13:44:03.387644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.389015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.389978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.390019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.390038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.390184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.416 [2024-10-01 13:44:03.396702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.397065] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.397114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.416 [2024-10-01 13:44:03.397137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.397225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.397262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.397281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.397298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.397332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.416 [2024-10-01 13:44:03.398736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.398854] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.398886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.416 [2024-10-01 13:44:03.398905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.400017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.400683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.400731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.400751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.400842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.416 [2024-10-01 13:44:03.406905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.407066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.407101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.416 [2024-10-01 13:44:03.407120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.407155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.407188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.407205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.407219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.407253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.416 [2024-10-01 13:44:03.408835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.408961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.408995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.416 [2024-10-01 13:44:03.409014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.409048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.409080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.409098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.409112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.409144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.416 [2024-10-01 13:44:03.417034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.417161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.417195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.416 [2024-10-01 13:44:03.417214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.417248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.417280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.417297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.417311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.417343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.416 [2024-10-01 13:44:03.420216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.420354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.420387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.416 [2024-10-01 13:44:03.420405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.420458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.420493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.420511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.420526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.420576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.416 [2024-10-01 13:44:03.427237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.427369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.416 [2024-10-01 13:44:03.427403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.416 [2024-10-01 13:44:03.427423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.416 [2024-10-01 13:44:03.427458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.416 [2024-10-01 13:44:03.427490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.416 [2024-10-01 13:44:03.427507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.416 [2024-10-01 13:44:03.427522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.416 [2024-10-01 13:44:03.427571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.416 [2024-10-01 13:44:03.430883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.416 [2024-10-01 13:44:03.431007] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.431041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.417 [2024-10-01 13:44:03.431059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.431093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.431125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.431144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.431159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.431191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.417 [2024-10-01 13:44:03.437348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.437478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.437512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.417 [2024-10-01 13:44:03.437532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.438489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.438737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.438776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.438816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.438865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.417 [2024-10-01 13:44:03.441038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.441929] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.441978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.417 [2024-10-01 13:44:03.442000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.442193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.442303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.442329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.442344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.442378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.417 [2024-10-01 13:44:03.447455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.447600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.447638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.417 [2024-10-01 13:44:03.447657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.447693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.447745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.447767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.447782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.447815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.417 [2024-10-01 13:44:03.451142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.451273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.451312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.417 [2024-10-01 13:44:03.451332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.452109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.452352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.452390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.452408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.452452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.417 [2024-10-01 13:44:03.457577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.457713] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.457776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.417 [2024-10-01 13:44:03.457799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.458913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.459153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.459195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.459225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.460432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.417 [2024-10-01 13:44:03.461247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.461372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.461407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.417 [2024-10-01 13:44:03.461425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.461707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.461895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.461934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.461952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.462066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.417 [2024-10-01 13:44:03.468674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.468816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.468864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.417 [2024-10-01 13:44:03.468885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.468920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.468953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.468971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.468985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.469018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.417 [2024-10-01 13:44:03.472111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.472252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.472287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.417 [2024-10-01 13:44:03.472306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.472340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.472408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.472431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.472446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.472479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.417 [2024-10-01 13:44:03.480102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.480305] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.480345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.417 [2024-10-01 13:44:03.480366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.480405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.480438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.480456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.480472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.480506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.417 [2024-10-01 13:44:03.482224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.483276] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.483329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.417 [2024-10-01 13:44:03.483351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.483565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.483618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.483639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.483654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.483688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.417 [2024-10-01 13:44:03.490560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.490696] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.490742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.417 [2024-10-01 13:44:03.490761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.490795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.490828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.490846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.490861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.490894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.417 [2024-10-01 13:44:03.492329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.492460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.492504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.417 [2024-10-01 13:44:03.492549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.493909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.494886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.494930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.494949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.495097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.417 [2024-10-01 13:44:03.501491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.501772] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.501822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.417 [2024-10-01 13:44:03.501847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.501930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.501966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.501984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.501998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.502034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.417 [2024-10-01 13:44:03.503565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.504791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.504840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.417 [2024-10-01 13:44:03.504861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.505589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.505705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.505740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.505758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.505799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.417 [2024-10-01 13:44:03.511606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.511742] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.511789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.417 [2024-10-01 13:44:03.511837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.511895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.511932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.511951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.511965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.512925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.417 [2024-10-01 13:44:03.513659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.513777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.513820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.417 [2024-10-01 13:44:03.513841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.513888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.513923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.513941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.513956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.513988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.417 [2024-10-01 13:44:03.521708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.521835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.521879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.417 [2024-10-01 13:44:03.521900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.523270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.524283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.524327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.524346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.524485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.417 [2024-10-01 13:44:03.524531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.524646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.524694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.417 [2024-10-01 13:44:03.524716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.524751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.524783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.417 [2024-10-01 13:44:03.524819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.417 [2024-10-01 13:44:03.524834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.417 [2024-10-01 13:44:03.524867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.417 [2024-10-01 13:44:03.532743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.417 [2024-10-01 13:44:03.532885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.417 [2024-10-01 13:44:03.532930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.417 [2024-10-01 13:44:03.532951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.417 [2024-10-01 13:44:03.534051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.417 [2024-10-01 13:44:03.534742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.534784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.534803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.534914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.418 [2024-10-01 13:44:03.534962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.535057] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.535099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.418 [2024-10-01 13:44:03.535121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.535399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.535582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.535617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.535634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.535747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.418 [2024-10-01 13:44:03.542857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.542984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.543027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.418 [2024-10-01 13:44:03.543047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.543081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.543114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.543142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.543168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.543213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.418 [2024-10-01 13:44:03.545666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.545810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.545845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.418 [2024-10-01 13:44:03.545863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.545897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.545930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.545948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.545962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.545994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.418 [2024-10-01 13:44:03.553491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.553628] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.553672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.418 [2024-10-01 13:44:03.553693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.553728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.553760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.553778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.553792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.553826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.418 [2024-10-01 13:44:03.556790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.556909] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.556951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.418 [2024-10-01 13:44:03.556972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.557006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.557038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.557056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.557070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.557101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.418 [2024-10-01 13:44:03.563801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.563937] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.563974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.418 [2024-10-01 13:44:03.563993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.564051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.564085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.564103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.564118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.564167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.418 [2024-10-01 13:44:03.568030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.568172] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.568217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.418 [2024-10-01 13:44:03.568238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.568275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.568308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.568326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.568340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.568373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.418 [2024-10-01 13:44:03.574640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.574764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.574806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.418 [2024-10-01 13:44:03.574827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.574861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.574894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.574912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.574926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.574958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.418 [2024-10-01 13:44:03.578227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.578366] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.578399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.418 [2024-10-01 13:44:03.578418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.578453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.578486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.578504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.578568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.578608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.418 [2024-10-01 13:44:03.585740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.585872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.585908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.418 [2024-10-01 13:44:03.585928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.585963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.585996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.586014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.586029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.586062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.418 [2024-10-01 13:44:03.589059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.589194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.589239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.418 [2024-10-01 13:44:03.589261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.589297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.589330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.589348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.589362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.589394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.418 [2024-10-01 13:44:03.596719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.596844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.596888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.418 [2024-10-01 13:44:03.596908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.596943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.596975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.596992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.597007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.597039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.418 [2024-10-01 13:44:03.599946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.600071] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.600136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.418 [2024-10-01 13:44:03.600158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.600194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.600227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.600245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.600259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.600291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.418 [2024-10-01 13:44:03.606816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.606939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.606982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.418 [2024-10-01 13:44:03.607003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.607038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.607070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.607088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.607102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.607142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.418 [2024-10-01 13:44:03.610943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.611065] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.611107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.418 [2024-10-01 13:44:03.611128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.611162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.611195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.611213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.611227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.611259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.418 [2024-10-01 13:44:03.617515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.617724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.617761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.418 [2024-10-01 13:44:03.617780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.617819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.617892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.617912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.617927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.619069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.418 [2024-10-01 13:44:03.621277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.418 [2024-10-01 13:44:03.621439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.418 [2024-10-01 13:44:03.621486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.418 [2024-10-01 13:44:03.621508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.418 [2024-10-01 13:44:03.621561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.418 [2024-10-01 13:44:03.621597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.418 [2024-10-01 13:44:03.621616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.418 [2024-10-01 13:44:03.621632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.418 [2024-10-01 13:44:03.621903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.418 [2024-10-01 13:44:03.627669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.627792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.627827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.419 [2024-10-01 13:44:03.627846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.628818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.629068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.629109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.629134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.629193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.419 [2024-10-01 13:44:03.632277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.632453] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.632496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.419 [2024-10-01 13:44:03.632518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.632566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.632602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.632620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.632634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.632666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.419 [2024-10-01 13:44:03.640079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.640203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.640238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.419 [2024-10-01 13:44:03.640257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.640290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.640323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.640341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.640355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.640388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.419 [2024-10-01 13:44:03.643325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.643441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.643482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.419 [2024-10-01 13:44:03.643503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.643550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.643587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.643606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.643620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.643652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.419 [2024-10-01 13:44:03.650251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.650379] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.650424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.419 [2024-10-01 13:44:03.650445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.650480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.650513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.650531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.650567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.650602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.419 [2024-10-01 13:44:03.654635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.654798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.654847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.419 [2024-10-01 13:44:03.654890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.654929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.654975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.654993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.655007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.655047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.419 [2024-10-01 13:44:03.661124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.661305] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.661371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.419 [2024-10-01 13:44:03.661407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.662579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.662845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.662884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.662904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.664009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.419 [2024-10-01 13:44:03.664774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.664894] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.664936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.419 [2024-10-01 13:44:03.664957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.664992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.665026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.665055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.665076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.665350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.419 8586.45 IOPS, 33.54 MiB/s [2024-10-01 13:44:03.672259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.672400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.672442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.419 [2024-10-01 13:44:03.672462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.672498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.672531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.672595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.672613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.672649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.419 [2024-10-01 13:44:03.675661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.675790] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.675824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.419 [2024-10-01 13:44:03.675844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.675890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.675927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.675945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.675960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.675993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.419 [2024-10-01 13:44:03.683562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.683690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.683724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.419 [2024-10-01 13:44:03.683743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.683785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.683820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.683838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.683853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.683911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.419 [2024-10-01 13:44:03.686663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.686957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.687003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.419 [2024-10-01 13:44:03.687026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.687069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.687105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.687123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.687137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.687170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.419 [2024-10-01 13:44:03.694765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.695127] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.695193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.419 [2024-10-01 13:44:03.695228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.695389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.695480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.695512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.695558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.695618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.419 [2024-10-01 13:44:03.698371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.698501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.698558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.419 [2024-10-01 13:44:03.698582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.698618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.698652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.698670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.698685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.698717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.419 [2024-10-01 13:44:03.704942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.705077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.705111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.419 [2024-10-01 13:44:03.705139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.705189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.705236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.705256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.705271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.705305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.419 [2024-10-01 13:44:03.708567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.708706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.708742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.419 [2024-10-01 13:44:03.708761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.708822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.708857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.708875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.708890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.708922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.419 [2024-10-01 13:44:03.715053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.716142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.716190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.419 [2024-10-01 13:44:03.716212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.716407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.716468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.716490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.716506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.716556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.419 [2024-10-01 13:44:03.719596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.719757] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.719799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.419 [2024-10-01 13:44:03.719819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.719854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.719897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.719918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.719932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.719964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.419 [2024-10-01 13:44:03.727366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.727492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.727525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.419 [2024-10-01 13:44:03.727563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.727600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.727632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.727650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.727693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.419 [2024-10-01 13:44:03.727729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.419 [2024-10-01 13:44:03.730644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.419 [2024-10-01 13:44:03.730771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.419 [2024-10-01 13:44:03.730805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.419 [2024-10-01 13:44:03.730824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.419 [2024-10-01 13:44:03.730858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.419 [2024-10-01 13:44:03.730891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.419 [2024-10-01 13:44:03.730908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.419 [2024-10-01 13:44:03.730923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.730954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.420 [2024-10-01 13:44:03.737531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.737667] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.737700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.420 [2024-10-01 13:44:03.737719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.737752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.737785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.737803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.737818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.737851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.420 [2024-10-01 13:44:03.741717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.741836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.741868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.420 [2024-10-01 13:44:03.741887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.741920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.741953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.741971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.741986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.742018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.420 [2024-10-01 13:44:03.748346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.748476] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.748549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.420 [2024-10-01 13:44:03.748573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.748609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.748643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.748661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.748675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.748708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.420 [2024-10-01 13:44:03.751960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.752117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.752164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.420 [2024-10-01 13:44:03.752185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.752221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.752254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.752272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.752288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.752320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.420 [2024-10-01 13:44:03.759347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.759482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.759515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.420 [2024-10-01 13:44:03.759550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.759590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.759623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.759641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.759655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.759689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.420 [2024-10-01 13:44:03.762763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.762880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.762912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.420 [2024-10-01 13:44:03.762931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.762965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.763022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.763042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.763056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.763088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.420 [2024-10-01 13:44:03.770884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.771036] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.771071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.420 [2024-10-01 13:44:03.771101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.771136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.771169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.771194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.771221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.771270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.420 [2024-10-01 13:44:03.772856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.772982] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.773018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.420 [2024-10-01 13:44:03.773037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.774005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.774229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.774267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.774286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.774330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.420 [2024-10-01 13:44:03.781155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.781286] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.781321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.420 [2024-10-01 13:44:03.781341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.781375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.781408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.781425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.781440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.781503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.420 [2024-10-01 13:44:03.785331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.785455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.785489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.420 [2024-10-01 13:44:03.785508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.785557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.785594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.785612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.785627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.785659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.420 [2024-10-01 13:44:03.791933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.792059] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.792093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.420 [2024-10-01 13:44:03.792111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.792145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.792178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.792195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.792210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.792243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.420 [2024-10-01 13:44:03.795508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.795666] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.795700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.420 [2024-10-01 13:44:03.795719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.795753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.795786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.795804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.795818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.795850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.420 [2024-10-01 13:44:03.802989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.803142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.803177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.420 [2024-10-01 13:44:03.803223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.803262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.803295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.803313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.803327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.803361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.420 [2024-10-01 13:44:03.806385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.806521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.806570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.420 [2024-10-01 13:44:03.806590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.806626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.806666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.806686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.806701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.806733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.420 [2024-10-01 13:44:03.814230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.814401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.814442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.420 [2024-10-01 13:44:03.814464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.814500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.814547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.814568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.814584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.814618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.420 [2024-10-01 13:44:03.816490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.817566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.817613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.420 [2024-10-01 13:44:03.817634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.817842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.817892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.817940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.817956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.817992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.420 [2024-10-01 13:44:03.824702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.824833] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.824868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.420 [2024-10-01 13:44:03.824887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.824921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.824954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.824972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.824986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.825019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.420 [2024-10-01 13:44:03.828894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.829016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.829050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.420 [2024-10-01 13:44:03.829072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.829114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.829148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.829166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.829181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.829213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.420 [2024-10-01 13:44:03.835445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.835586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.420 [2024-10-01 13:44:03.835621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.420 [2024-10-01 13:44:03.835640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.420 [2024-10-01 13:44:03.835675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.420 [2024-10-01 13:44:03.835709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.420 [2024-10-01 13:44:03.835734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.420 [2024-10-01 13:44:03.835750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.420 [2024-10-01 13:44:03.835784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.420 [2024-10-01 13:44:03.839071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.420 [2024-10-01 13:44:03.839194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.839228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.421 [2024-10-01 13:44:03.839247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.839281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.839313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.839331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.839346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.839382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.421 [2024-10-01 13:44:03.846574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.846704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.846738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.421 [2024-10-01 13:44:03.846757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.846792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.846825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.846843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.846858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.846891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.421 [2024-10-01 13:44:03.849991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.850116] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.850150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.421 [2024-10-01 13:44:03.850168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.850201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.850235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.850252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.850267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.850299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.421 [2024-10-01 13:44:03.857654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.857786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.857826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.421 [2024-10-01 13:44:03.857845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.857905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.857939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.857956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.857970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.858003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.421 [2024-10-01 13:44:03.860979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.861102] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.861140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.421 [2024-10-01 13:44:03.861167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.861203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.861236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.861254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.861268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.861301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.421 [2024-10-01 13:44:03.868109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.868323] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.868360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.421 [2024-10-01 13:44:03.868380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.868416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.868450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.868468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.868484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.868518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.421 [2024-10-01 13:44:03.872380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.872548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.872585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.421 [2024-10-01 13:44:03.872604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.872639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.872673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.872691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.872735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.872771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.421 [2024-10-01 13:44:03.879117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.879296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.879333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.421 [2024-10-01 13:44:03.879352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.879402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.879437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.879455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.879470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.879505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.421 [2024-10-01 13:44:03.882799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.882920] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.882954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.421 [2024-10-01 13:44:03.882974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.883007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.883040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.883058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.883073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.883105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.421 [2024-10-01 13:44:03.889269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.890322] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.890368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.421 [2024-10-01 13:44:03.890390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.890634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.890697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.890719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.890734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.890769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.421 [2024-10-01 13:44:03.893869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.893995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.894063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.421 [2024-10-01 13:44:03.894086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.894121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.894154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.894172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.894186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.895296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.421 [2024-10-01 13:44:03.901633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.901766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.901812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.421 [2024-10-01 13:44:03.901833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.901868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.901901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.901918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.901932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.901965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.421 [2024-10-01 13:44:03.904899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.905019] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.905061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.421 [2024-10-01 13:44:03.905083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.905118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.905152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.905169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.905183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.905215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.421 [2024-10-01 13:44:03.911857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.911998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.912034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.421 [2024-10-01 13:44:03.912054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.912097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.912152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.912172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.912186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.912450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.421 [2024-10-01 13:44:03.916056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.916179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.916218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.421 [2024-10-01 13:44:03.916237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.916271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.916303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.916322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.916336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.916367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.421 [2024-10-01 13:44:03.922612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.922743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.922784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.421 [2024-10-01 13:44:03.922804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.922838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.922871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.922889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.922911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.922945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.421 [2024-10-01 13:44:03.926221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.926353] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.926398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.421 [2024-10-01 13:44:03.926419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.926454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.926487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.926515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.926531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.926609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.421 [2024-10-01 13:44:03.933828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.934174] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.421 [2024-10-01 13:44:03.934228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.421 [2024-10-01 13:44:03.934250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.421 [2024-10-01 13:44:03.934297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.421 [2024-10-01 13:44:03.934334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.421 [2024-10-01 13:44:03.934352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.421 [2024-10-01 13:44:03.934366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.421 [2024-10-01 13:44:03.934402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.421 [2024-10-01 13:44:03.937270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.421 [2024-10-01 13:44:03.937395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:03.937445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.422 [2024-10-01 13:44:03.937466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:03.937501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:03.937549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:03.937571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:03.937586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:03.937620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.422 [2024-10-01 13:44:03.945104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:03.945252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:03.945288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.422 [2024-10-01 13:44:03.945307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:03.945342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:03.945376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:03.945394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:03.945408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:03.945442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.422 [2024-10-01 13:44:03.948332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:03.948451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:03.948484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.422 [2024-10-01 13:44:03.948549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:03.948591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:03.948625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:03.948645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:03.948660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:03.948692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.422 [2024-10-01 13:44:03.955235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:03.955364] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:03.955399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.422 [2024-10-01 13:44:03.955419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:03.955453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:03.955486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:03.955504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:03.955518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:03.955576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.422 [2024-10-01 13:44:03.959411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:03.959548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:03.959585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.422 [2024-10-01 13:44:03.959605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:03.959640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:03.959673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:03.959691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:03.959706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:03.959738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.422 [2024-10-01 13:44:03.966025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:03.966166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:03.966211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.422 [2024-10-01 13:44:03.966232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:03.966268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:03.966301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:03.966342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:03.966357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:03.966392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.422 [2024-10-01 13:44:03.969656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:03.969776] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:03.969810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.422 [2024-10-01 13:44:03.969829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:03.969862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:03.969895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:03.969912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:03.969927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:03.969959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.422 [2024-10-01 13:44:03.977081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:03.977217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:03.977262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.422 [2024-10-01 13:44:03.977283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:03.977318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:03.977352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:03.977369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:03.977384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:03.977417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.422 [2024-10-01 13:44:03.980506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:03.980639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:03.980686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.422 [2024-10-01 13:44:03.980707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:03.980741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:03.980773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:03.980792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:03.980806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:03.980838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.422 [2024-10-01 13:44:03.988287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:03.988422] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:03.988457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.422 [2024-10-01 13:44:03.988477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:03.988512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:03.988562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:03.988582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:03.988597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:03.988630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.422 [2024-10-01 13:44:03.991578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:03.991720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:03.991763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.422 [2024-10-01 13:44:03.991793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:03.991831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:03.991865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:03.991898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:03.991914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:03.991949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.422 [2024-10-01 13:44:03.998611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:03.998741] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:03.998776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.422 [2024-10-01 13:44:03.998795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:03.998829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:03.998863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:03.998880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:03.998895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:03.998928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.422 [2024-10-01 13:44:04.002831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:04.002956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:04.002991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.422 [2024-10-01 13:44:04.003010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:04.003068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:04.003116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:04.003140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:04.003155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:04.003193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.422 [2024-10-01 13:44:04.009556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:04.009681] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:04.009716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.422 [2024-10-01 13:44:04.009736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:04.009770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:04.009807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:04.009834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:04.009849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:04.009910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.422 [2024-10-01 13:44:04.013095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:04.013222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:04.013255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.422 [2024-10-01 13:44:04.013273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:04.013307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:04.013339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:04.013357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:04.013371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:04.013403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.422 [2024-10-01 13:44:04.020721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:04.020902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:04.020939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.422 [2024-10-01 13:44:04.020958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:04.020994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:04.021027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:04.021046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:04.021094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:04.021132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.422 [2024-10-01 13:44:04.024192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:04.024320] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:04.024366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.422 [2024-10-01 13:44:04.024387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:04.024436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:04.024471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:04.024489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:04.024504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:04.024550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.422 [2024-10-01 13:44:04.032132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:04.032258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:04.032291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.422 [2024-10-01 13:44:04.032309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:04.032343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:04.032380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:04.032411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:04.032429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:04.032463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.422 [2024-10-01 13:44:04.035377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:04.035505] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:04.035562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.422 [2024-10-01 13:44:04.035590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:04.035628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:04.035660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:04.035686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:04.035702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.422 [2024-10-01 13:44:04.035736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.422 [2024-10-01 13:44:04.042245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.422 [2024-10-01 13:44:04.042377] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.422 [2024-10-01 13:44:04.042446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.422 [2024-10-01 13:44:04.042470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.422 [2024-10-01 13:44:04.042506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.422 [2024-10-01 13:44:04.042789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.422 [2024-10-01 13:44:04.042835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.422 [2024-10-01 13:44:04.042854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.043009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.423 [2024-10-01 13:44:04.046322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.046445] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.046481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.423 [2024-10-01 13:44:04.046499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.046549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.046586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.046604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.046619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.046651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.423 [2024-10-01 13:44:04.052771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.052906] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.052954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.423 [2024-10-01 13:44:04.052975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.053010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.053044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.053061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.053076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.054184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.423 [2024-10-01 13:44:04.056424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.056560] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.056599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.423 [2024-10-01 13:44:04.056619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.056896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.057088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.057126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.057145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.057263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.423 [2024-10-01 13:44:04.063530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.063677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.063713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.423 [2024-10-01 13:44:04.063733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.063773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.063811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.063829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.063843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.063888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.423 [2024-10-01 13:44:04.066833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.066962] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.067010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.423 [2024-10-01 13:44:04.067032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.067066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.067098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.067116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.067131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.068276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.423 [2024-10-01 13:44:04.074420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.074560] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.074596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.423 [2024-10-01 13:44:04.074623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.074659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.074693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.074711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.074726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.074780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.423 [2024-10-01 13:44:04.077718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.077838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.077872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.423 [2024-10-01 13:44:04.077891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.077925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.077957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.077975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.077989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.078021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.423 [2024-10-01 13:44:04.084684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.084804] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.084847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.423 [2024-10-01 13:44:04.084868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.084902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.084934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.084951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.084966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.084998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.423 [2024-10-01 13:44:04.088768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.088884] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.088932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.423 [2024-10-01 13:44:04.088952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.088986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.089019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.089037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.089052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.089083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.423 [2024-10-01 13:44:04.095521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.095652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.095695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.423 [2024-10-01 13:44:04.095738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.095775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.095808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.095826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.095840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.095872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.423 [2024-10-01 13:44:04.099107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.099225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.099259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.423 [2024-10-01 13:44:04.099278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.099311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.099344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.099361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.099376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.099408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.423 [2024-10-01 13:44:04.106381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.106499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.106531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.423 [2024-10-01 13:44:04.106568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.106603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.106636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.106654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.106668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.106700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.423 [2024-10-01 13:44:04.109693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.109809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.109847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.423 [2024-10-01 13:44:04.109867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.109900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.109932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.109969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.109984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.110018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.423 [2024-10-01 13:44:04.117285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.117405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.117448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.423 [2024-10-01 13:44:04.117468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.117502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.117548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.117569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.117584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.117617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.423 [2024-10-01 13:44:04.120500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.120625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.120658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.423 [2024-10-01 13:44:04.120676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.120709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.120742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.120759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.120773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.120805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.423 [2024-10-01 13:44:04.127378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.127494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.127555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.423 [2024-10-01 13:44:04.127577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.127610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.127643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.127660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.127674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.127946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.423 [2024-10-01 13:44:04.131428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.131562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.131597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.423 [2024-10-01 13:44:04.131615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.131649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.131681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.131700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.131714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.131746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.423 [2024-10-01 13:44:04.138038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.138164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.423 [2024-10-01 13:44:04.138200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.423 [2024-10-01 13:44:04.138218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.423 [2024-10-01 13:44:04.138252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.423 [2024-10-01 13:44:04.138285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.423 [2024-10-01 13:44:04.138302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.423 [2024-10-01 13:44:04.138317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.423 [2024-10-01 13:44:04.138349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.423 [2024-10-01 13:44:04.141723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.423 [2024-10-01 13:44:04.141845] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.141901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.424 [2024-10-01 13:44:04.141921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.141955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.141988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.142005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.142020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.142052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.424 [2024-10-01 13:44:04.149024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.149305] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.149351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.424 [2024-10-01 13:44:04.149372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.149446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.149483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.149501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.149516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.149563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.424 [2024-10-01 13:44:04.152527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.152657] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.152708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.424 [2024-10-01 13:44:04.152728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.152763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.152795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.152813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.152827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.152858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.424 [2024-10-01 13:44:04.160116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.160232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.160280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.424 [2024-10-01 13:44:04.160301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.160334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.160367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.160384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.160398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.160430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.424 [2024-10-01 13:44:04.163318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.163432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.163478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.424 [2024-10-01 13:44:04.163498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.163532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.163583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.163602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.163632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.163666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.424 [2024-10-01 13:44:04.170213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.170330] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.170378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.424 [2024-10-01 13:44:04.170399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.170433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.170465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.170483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.170497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.170773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.424 [2024-10-01 13:44:04.174232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.174347] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.174389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.424 [2024-10-01 13:44:04.174410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.174443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.174474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.174492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.174506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.174553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.424 [2024-10-01 13:44:04.180690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.180807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.180852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.424 [2024-10-01 13:44:04.180872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.180906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.180938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.180955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.180970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.181002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.424 [2024-10-01 13:44:04.184325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.184459] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.184508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.424 [2024-10-01 13:44:04.184528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.184578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.184612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.184630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.184644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.184904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.424 [2024-10-01 13:44:04.191523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.191654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.191700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.424 [2024-10-01 13:44:04.191720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.191754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.191786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.191804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.191818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.191850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.424 [2024-10-01 13:44:04.194834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.194950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.194989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.424 [2024-10-01 13:44:04.195008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.195041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.195074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.195091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.195106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.195137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.424 [2024-10-01 13:44:04.202465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.202602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.202651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.424 [2024-10-01 13:44:04.202671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.202706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.202760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.202780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.202794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.202826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.424 [2024-10-01 13:44:04.205694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.205809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.205848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.424 [2024-10-01 13:44:04.205867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.205901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.205933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.205951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.205965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.205996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.424 [2024-10-01 13:44:04.212582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.212701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.212734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.424 [2024-10-01 13:44:04.212752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.212785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.212817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.212834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.212850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.212882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.424 [2024-10-01 13:44:04.216575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.216689] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.216735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.424 [2024-10-01 13:44:04.216756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.216789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.216822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.216840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.216854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.216908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.424 [2024-10-01 13:44:04.223093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.223211] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.223258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.424 [2024-10-01 13:44:04.223279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.223313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.223345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.223362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.223377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.223409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.424 [2024-10-01 13:44:04.226664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.226778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.226828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.424 [2024-10-01 13:44:04.226848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.226881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.226913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.226931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.226946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.227210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.424 [2024-10-01 13:44:04.233971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.234086] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.234119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.424 [2024-10-01 13:44:04.234137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.234170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.234202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.234219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.234234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.234265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.424 [2024-10-01 13:44:04.237421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.237551] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.237586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.424 [2024-10-01 13:44:04.237625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.237662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.237696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.237713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.424 [2024-10-01 13:44:04.237728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.424 [2024-10-01 13:44:04.237758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.424 [2024-10-01 13:44:04.245207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.424 [2024-10-01 13:44:04.245334] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.424 [2024-10-01 13:44:04.245374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.424 [2024-10-01 13:44:04.245393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.424 [2024-10-01 13:44:04.245427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.424 [2024-10-01 13:44:04.245460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.424 [2024-10-01 13:44:04.245477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.245492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.245524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.425 [2024-10-01 13:44:04.248561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.248682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.248723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.425 [2024-10-01 13:44:04.248743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.248778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.248810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.248828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.248843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.248874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.425 [2024-10-01 13:44:04.255458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.255600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.255648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.425 [2024-10-01 13:44:04.255669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.255703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.255736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.255778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.255794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.255828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.425 [2024-10-01 13:44:04.259639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.259755] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.259809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.425 [2024-10-01 13:44:04.259829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.259863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.259908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.259927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.259942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.259973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.425 [2024-10-01 13:44:04.266132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.266256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.266300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.425 [2024-10-01 13:44:04.266321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.266355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.266388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.266405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.266420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.266451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.425 [2024-10-01 13:44:04.269734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.269849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.269893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.425 [2024-10-01 13:44:04.269914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.269948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.269980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.269997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.270011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.270043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.425 [2024-10-01 13:44:04.276948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.277066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.277110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.425 [2024-10-01 13:44:04.277130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.277164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.277196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.277213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.277228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.277260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.425 [2024-10-01 13:44:04.280302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.280417] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.280466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.425 [2024-10-01 13:44:04.280486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.280519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.280569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.280589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.280603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.280634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.425 [2024-10-01 13:44:04.288088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.288206] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.288252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.425 [2024-10-01 13:44:04.288272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.288306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.288338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.288356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.288370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.288402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.425 [2024-10-01 13:44:04.291332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.291447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.291495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.425 [2024-10-01 13:44:04.291516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.291588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.291623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.291641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.291656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.291688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.425 [2024-10-01 13:44:04.298184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.298303] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.298335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.425 [2024-10-01 13:44:04.298354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.298387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.298419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.298437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.298451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.298483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.425 [2024-10-01 13:44:04.302296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.302412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.302463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.425 [2024-10-01 13:44:04.302483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.302517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.302568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.302589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.302603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.302635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.425 [2024-10-01 13:44:04.308833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.308951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.308998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.425 [2024-10-01 13:44:04.309018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.309052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.309085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.309102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.309136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.309171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.425 [2024-10-01 13:44:04.312394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.312509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.312567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.425 [2024-10-01 13:44:04.312590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.312625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.312658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.312675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.312690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.312722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.425 [2024-10-01 13:44:04.319782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.319964] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.320000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.425 [2024-10-01 13:44:04.320019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.320054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.320088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.320105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.320121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.320154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.425 [2024-10-01 13:44:04.323157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.323278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.323316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.425 [2024-10-01 13:44:04.323336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.323369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.323401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.323419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.323434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.323466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.425 [2024-10-01 13:44:04.330913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.331121] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.331158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.425 [2024-10-01 13:44:04.331177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.331213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.331247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.331265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.331281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.331314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.425 [2024-10-01 13:44:04.334307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.334425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.334482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.425 [2024-10-01 13:44:04.334503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.334553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.334590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.334609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.334623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.334654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.425 [2024-10-01 13:44:04.341386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.341505] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.425 [2024-10-01 13:44:04.341552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.425 [2024-10-01 13:44:04.341574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.425 [2024-10-01 13:44:04.341608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.425 [2024-10-01 13:44:04.341641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.425 [2024-10-01 13:44:04.341659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.425 [2024-10-01 13:44:04.341673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.425 [2024-10-01 13:44:04.341706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.425 [2024-10-01 13:44:04.345516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.425 [2024-10-01 13:44:04.345639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.345693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.426 [2024-10-01 13:44:04.345713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.345746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.345796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.345815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.345830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.345860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.426 [2024-10-01 13:44:04.352264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.352389] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.352433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.426 [2024-10-01 13:44:04.352453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.352487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.352520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.352554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.352571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.352604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.426 [2024-10-01 13:44:04.355920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.356035] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.356078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.426 [2024-10-01 13:44:04.356106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.356140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.356173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.356190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.356205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.356237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.426 [2024-10-01 13:44:04.363409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.363526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.363584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.426 [2024-10-01 13:44:04.363605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.363639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.363672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.363689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.363704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.363756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.426 [2024-10-01 13:44:04.366900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.367017] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.367064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.426 [2024-10-01 13:44:04.367084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.367118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.367150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.367168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.367182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.367214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.426 [2024-10-01 13:44:04.374676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.374795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.374840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.426 [2024-10-01 13:44:04.374861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.374895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.374927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.374945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.374960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.374992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.426 [2024-10-01 13:44:04.378065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.378190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.378232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.426 [2024-10-01 13:44:04.378253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.378286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.378319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.378337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.378351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.378382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.426 [2024-10-01 13:44:04.385106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.385230] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.385281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.426 [2024-10-01 13:44:04.385320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.385356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.385389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.385407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.385422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.385455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.426 [2024-10-01 13:44:04.389314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.389434] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.389483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.426 [2024-10-01 13:44:04.389503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.389550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.389587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.389606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.389620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.389653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.426 [2024-10-01 13:44:04.396026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.396145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.396191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.426 [2024-10-01 13:44:04.396211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.396245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.396278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.396295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.396309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.396341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.426 [2024-10-01 13:44:04.399650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.399767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.399808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.426 [2024-10-01 13:44:04.399827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.399861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.399908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.399943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.399959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.399992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.426 [2024-10-01 13:44:04.406989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.407107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.407154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.426 [2024-10-01 13:44:04.407174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.407208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.407241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.407258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.407273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.407305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.426 [2024-10-01 13:44:04.410357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.410474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.410519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.426 [2024-10-01 13:44:04.410553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.410590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.410623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.410640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.410655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.410686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.426 [2024-10-01 13:44:04.417957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.418077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.418120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.426 [2024-10-01 13:44:04.418141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.418174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.418207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.418224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.418238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.418270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.426 [2024-10-01 13:44:04.421180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.421297] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.421345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.426 [2024-10-01 13:44:04.421366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.421399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.421431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.421449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.421464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.421496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.426 [2024-10-01 13:44:04.428060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.428177] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.428227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.426 [2024-10-01 13:44:04.428247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.428281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.428322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.428339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.428354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.428645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.426 [2024-10-01 13:44:04.432075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.432190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.432235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.426 [2024-10-01 13:44:04.432255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.432288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.432321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.432338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.432353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.432384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.426 [2024-10-01 13:44:04.438599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.438717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.438755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.426 [2024-10-01 13:44:04.438773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.438826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.438860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.438877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.438891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.438924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.426 [2024-10-01 13:44:04.442171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.442288] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.442331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.426 [2024-10-01 13:44:04.442351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.442385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.442417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.442435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.442450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.442482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.426 [2024-10-01 13:44:04.449453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.426 [2024-10-01 13:44:04.449583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.426 [2024-10-01 13:44:04.449617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.426 [2024-10-01 13:44:04.449635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.426 [2024-10-01 13:44:04.449670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.426 [2024-10-01 13:44:04.449703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.426 [2024-10-01 13:44:04.449720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.426 [2024-10-01 13:44:04.449734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.426 [2024-10-01 13:44:04.449766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.426 [2024-10-01 13:44:04.452787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.452902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.452934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.427 [2024-10-01 13:44:04.452952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.452985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.453016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.453034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.453064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.453099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.427 [2024-10-01 13:44:04.460379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.460497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.460555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.427 [2024-10-01 13:44:04.460577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.460612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.460645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.460663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.460677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.460709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.427 [2024-10-01 13:44:04.463608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.463721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.463765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.427 [2024-10-01 13:44:04.463785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.463818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.463850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.463868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.463893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.463927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.427 [2024-10-01 13:44:04.471187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.471305] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.471337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.427 [2024-10-01 13:44:04.471356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.471390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.471439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.471462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.471477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.471510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.427 [2024-10-01 13:44:04.473700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.473831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.473864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.427 [2024-10-01 13:44:04.473881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.473915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.473948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.473965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.473980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.475299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.427 [2024-10-01 13:44:04.482122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.482973] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.483019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.427 [2024-10-01 13:44:04.483040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.483215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.483309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.483337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.483352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.483387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.427 [2024-10-01 13:44:04.484783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.484897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.484929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.427 [2024-10-01 13:44:04.484947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.486015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.486668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.486707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.486725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.486825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.427 [2024-10-01 13:44:04.492465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.492598] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.492642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.427 [2024-10-01 13:44:04.492662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.492696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.492753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.492773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.492787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.492819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.427 [2024-10-01 13:44:04.494877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.494988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.495020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.427 [2024-10-01 13:44:04.495038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.496271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.496509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.496556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.496576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.497327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.427 [2024-10-01 13:44:04.502571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.502688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.502732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.427 [2024-10-01 13:44:04.502750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.504080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.505055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.505096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.505114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.505257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.427 [2024-10-01 13:44:04.505312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.505405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.505436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.427 [2024-10-01 13:44:04.505454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.505487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.505519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.505551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.505569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.505621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.427 [2024-10-01 13:44:04.513418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.513549] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.513582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.427 [2024-10-01 13:44:04.513599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.514665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.515302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.515341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.515359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.515455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.427 [2024-10-01 13:44:04.515518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.515630] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.515661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.427 [2024-10-01 13:44:04.515678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.515711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.515984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.516023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.516041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.516195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.427 [2024-10-01 13:44:04.523511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.523641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.523673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.427 [2024-10-01 13:44:04.523692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.523725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.524940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.524980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.524998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.525230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.427 [2024-10-01 13:44:04.526201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.526314] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.526355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.427 [2024-10-01 13:44:04.526390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.526426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.527514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.527568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.527586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.527806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.427 [2024-10-01 13:44:04.533803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.533920] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.533952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.427 [2024-10-01 13:44:04.533971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.534004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.534036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.534053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.534067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.534099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.427 [2024-10-01 13:44:04.536987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.537103] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.537147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.427 [2024-10-01 13:44:04.537167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.537201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.537233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.537251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.537266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.537297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.427 [2024-10-01 13:44:04.543913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.544030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.544062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.427 [2024-10-01 13:44:04.544081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.544113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.544146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.544181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.544197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.544231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.427 [2024-10-01 13:44:04.548059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.548175] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.548208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.427 [2024-10-01 13:44:04.548226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.427 [2024-10-01 13:44:04.548259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.427 [2024-10-01 13:44:04.548292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.427 [2024-10-01 13:44:04.548309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.427 [2024-10-01 13:44:04.548324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.427 [2024-10-01 13:44:04.548356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.427 [2024-10-01 13:44:04.554555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.427 [2024-10-01 13:44:04.554691] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.427 [2024-10-01 13:44:04.554743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.427 [2024-10-01 13:44:04.554764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.554798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.554831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.554849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.554863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.554896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.428 [2024-10-01 13:44:04.558149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.558264] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.558310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.428 [2024-10-01 13:44:04.558331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.558364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.558397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.558415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.558429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.558705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.428 [2024-10-01 13:44:04.565355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.565472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.565505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.428 [2024-10-01 13:44:04.565522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.565571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.565606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.565624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.565638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.565670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.428 [2024-10-01 13:44:04.568639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.568759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.568806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.428 [2024-10-01 13:44:04.568826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.568860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.568893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.568911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.568925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.568956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.428 [2024-10-01 13:44:04.576197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.576315] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.576363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.428 [2024-10-01 13:44:04.576383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.576417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.576450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.576467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.576482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.576514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.428 [2024-10-01 13:44:04.579400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.579514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.579569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.428 [2024-10-01 13:44:04.579591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.579645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.579679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.579696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.579710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.579742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.428 [2024-10-01 13:44:04.586291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.586408] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.586451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.428 [2024-10-01 13:44:04.586472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.586506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.586552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.586573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.586588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.586854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.428 [2024-10-01 13:44:04.590334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.590450] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.590482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.428 [2024-10-01 13:44:04.590500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.590546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.590583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.590601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.590615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.590647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.428 [2024-10-01 13:44:04.596862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.596981] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.597020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.428 [2024-10-01 13:44:04.597040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.597073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.597106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.597123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.597156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.597190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.428 [2024-10-01 13:44:04.600427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.600558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.600604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.428 [2024-10-01 13:44:04.600624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.600659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.600692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.600710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.600725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.600985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.428 [2024-10-01 13:44:04.607648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.607765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.607811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.428 [2024-10-01 13:44:04.607831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.607865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.607911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.607936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.607950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.607982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.428 [2024-10-01 13:44:04.610923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.611038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.611082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.428 [2024-10-01 13:44:04.611102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.611136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.611168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.611186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.611200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.611232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.428 [2024-10-01 13:44:04.618509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.618661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.618712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.428 [2024-10-01 13:44:04.618733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.618767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.618800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.618817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.618832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.618865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.428 [2024-10-01 13:44:04.621754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.621868] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.621912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.428 [2024-10-01 13:44:04.621933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.621966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.621998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.622016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.622030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.622061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.428 [2024-10-01 13:44:04.628640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.628761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.628800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.428 [2024-10-01 13:44:04.628818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.628851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.628884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.628901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.628915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.629179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.428 [2024-10-01 13:44:04.632666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.632782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.632820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.428 [2024-10-01 13:44:04.632840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.632873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.632923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.632943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.632958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.632990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.428 [2024-10-01 13:44:04.639120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.639239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.639281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.428 [2024-10-01 13:44:04.639301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.639335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.639367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.639385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.639399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.639431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.428 [2024-10-01 13:44:04.642759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.642872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.642916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.428 [2024-10-01 13:44:04.642937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.642971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.643002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.428 [2024-10-01 13:44:04.643020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.428 [2024-10-01 13:44:04.643034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.428 [2024-10-01 13:44:04.643294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.428 [2024-10-01 13:44:04.649924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.428 [2024-10-01 13:44:04.650041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.428 [2024-10-01 13:44:04.650084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.428 [2024-10-01 13:44:04.650104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.428 [2024-10-01 13:44:04.650138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.428 [2024-10-01 13:44:04.650170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.650187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.650202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.650253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.429 [2024-10-01 13:44:04.653208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.653325] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.653370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.429 [2024-10-01 13:44:04.653390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.653423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.653454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.653471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.653486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.653517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.429 [2024-10-01 13:44:04.660811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.660929] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.660976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.429 [2024-10-01 13:44:04.660996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.661029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.661061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.661078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.661093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.661124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.429 [2024-10-01 13:44:04.663298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.663409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.663454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.429 [2024-10-01 13:44:04.663474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.663507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.663556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.663577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.663592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.663624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.429 8611.58 IOPS, 33.64 MiB/s [2024-10-01 13:44:04.672368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.672488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.672557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.429 [2024-10-01 13:44:04.672581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.672616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.672649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.672667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.672681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.672714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.429 [2024-10-01 13:44:04.673385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.673497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.673552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.429 [2024-10-01 13:44:04.673574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.673608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.673640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.673658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.673672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.673703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
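Note on the "8611.58 IOPS, 33.64 MiB/s" sample interleaved above: it is a periodic throughput report from the test's I/O generator, and the two figures are consistent with a 4 KiB I/O size, since 8611.58 x 4096 bytes is roughly 35.27 MB, i.e. about 33.64 MiB, per second. A minimal check of that arithmetic follows; the 4 KiB block size is an inference from the numbers, not something stated in the log.

/* Minimal sketch: verify that 8611.58 IOPS at an assumed 4 KiB I/O size
 * matches the 33.64 MiB/s figure printed in the log. */
#include <stdio.h>

int main(void)
{
    double iops = 8611.58;            /* from the log line above */
    double io_size_bytes = 4096.0;    /* assumption: 4 KiB I/Os */
    double mib_per_s = iops * io_size_bytes / (1024.0 * 1024.0);

    printf("%.2f IOPS * 4 KiB = %.2f MiB/s\n", iops, mib_per_s);  /* prints 33.64 */
    return 0;
}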
00:16:17.429 [2024-10-01 13:44:04.683333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.683629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.683674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.429 [2024-10-01 13:44:04.683695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.683790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.683836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.683868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.683899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.683915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.683947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.429 [2024-10-01 13:44:04.684010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.684038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.429 [2024-10-01 13:44:04.684056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.685148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.685389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.685425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.685443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.686509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.429 [2024-10-01 13:44:04.693430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.693563] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.693606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.429 [2024-10-01 13:44:04.693626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.694554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.694789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.694826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.694844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.694890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.694916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.429 [2024-10-01 13:44:04.694997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.695029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.429 [2024-10-01 13:44:04.695047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.695080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.695112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.695130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.695144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.695175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.429 [2024-10-01 13:44:04.705756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.705834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.705917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.705947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.429 [2024-10-01 13:44:04.705965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.706033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.706061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.429 [2024-10-01 13:44:04.706078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.706096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.706151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.706173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.706187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.706201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.706233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.429 [2024-10-01 13:44:04.706252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.706267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.706281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.706310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.429 [2024-10-01 13:44:04.715873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.716014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.716057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.429 [2024-10-01 13:44:04.716079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.716115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.716150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.716223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.716252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.429 [2024-10-01 13:44:04.716269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.716284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.716297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.716311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.716590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.429 [2024-10-01 13:44:04.716624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.716768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.716805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.716822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.716933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.429 [2024-10-01 13:44:04.726590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.726638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.726737] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.726792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.429 [2024-10-01 13:44:04.726814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.726867] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.726893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.429 [2024-10-01 13:44:04.726909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.726943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.726967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.728064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.728105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.728124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.728142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.728157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.728170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.728400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.429 [2024-10-01 13:44:04.728428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.429 [2024-10-01 13:44:04.737382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.737432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.737530] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.737576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.429 [2024-10-01 13:44:04.737595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.737647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.737672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.429 [2024-10-01 13:44:04.737689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.737723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.737746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.737772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.737789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.737803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.737820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.737835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.737864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.737898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.429 [2024-10-01 13:44:04.737918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.429 [2024-10-01 13:44:04.748259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.748310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.748409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.748448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.429 [2024-10-01 13:44:04.748467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.748518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.429 [2024-10-01 13:44:04.748558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.429 [2024-10-01 13:44:04.748578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.429 [2024-10-01 13:44:04.748612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.748635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.429 [2024-10-01 13:44:04.748661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.748679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.748693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.748711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.429 [2024-10-01 13:44:04.748726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.429 [2024-10-01 13:44:04.748739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.429 [2024-10-01 13:44:04.748771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.429 [2024-10-01 13:44:04.748790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.429 [2024-10-01 13:44:04.758393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.758469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.429 [2024-10-01 13:44:04.758567] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.758604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.430 [2024-10-01 13:44:04.758623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.758693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.758721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.430 [2024-10-01 13:44:04.758738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.758757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.759021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.759084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.759103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.759117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.759254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.430 [2024-10-01 13:44:04.759279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.759294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.759309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.759418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.430 [2024-10-01 13:44:04.768966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.769016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.769114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.769146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.430 [2024-10-01 13:44:04.769165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.769214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.769239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.430 [2024-10-01 13:44:04.769256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.769289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.769312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.770396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.770436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.770455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.770472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.770488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.770501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.770742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.430 [2024-10-01 13:44:04.770770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.430 [2024-10-01 13:44:04.779758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.779807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.779915] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.779948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.430 [2024-10-01 13:44:04.779983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.780038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.780063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.430 [2024-10-01 13:44:04.780080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.780116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.780150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.780177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.780195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.780210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.780226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.780241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.780255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.780286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.430 [2024-10-01 13:44:04.780306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.430 [2024-10-01 13:44:04.790646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.790697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.790795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.790832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.430 [2024-10-01 13:44:04.790852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.790902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.790927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.430 [2024-10-01 13:44:04.790944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.790977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.791000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.791027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.791044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.791059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.791076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.791091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.791104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.791155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.430 [2024-10-01 13:44:04.791177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.430 [2024-10-01 13:44:04.800782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.800833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.800931] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.800963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.430 [2024-10-01 13:44:04.800982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.801031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.801056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.430 [2024-10-01 13:44:04.801072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.801334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.801378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.801523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.801571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.801589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.801607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.801622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.801636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.801748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.430 [2024-10-01 13:44:04.801770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.430 [2024-10-01 13:44:04.811237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.811287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.811385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.811423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.430 [2024-10-01 13:44:04.811442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.811493] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.811517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.430 [2024-10-01 13:44:04.811550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.811589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.811613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.812705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.812760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.812779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.812797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.812813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.812826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.813046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.430 [2024-10-01 13:44:04.813074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.430 [2024-10-01 13:44:04.822002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.822052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.822149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.822181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.430 [2024-10-01 13:44:04.822199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.822249] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.822274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.430 [2024-10-01 13:44:04.822290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.822324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.822348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.822375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.822392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.822407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.822424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.822440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.822453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.822485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.430 [2024-10-01 13:44:04.822504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.430 [2024-10-01 13:44:04.832956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.833006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.833104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.833136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.430 [2024-10-01 13:44:04.833154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.833224] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.833251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.430 [2024-10-01 13:44:04.833268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.833302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.833326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.833352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.833370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.833384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.833401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.833416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.833429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.833461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.430 [2024-10-01 13:44:04.833480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.430 [2024-10-01 13:44:04.843089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.843164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.843248] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.843278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.430 [2024-10-01 13:44:04.843296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.843363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.843391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.430 [2024-10-01 13:44:04.843408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.430 [2024-10-01 13:44:04.843426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.843707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.430 [2024-10-01 13:44:04.843748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.843766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.843781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.843938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.430 [2024-10-01 13:44:04.843966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.430 [2024-10-01 13:44:04.843981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.430 [2024-10-01 13:44:04.843994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.430 [2024-10-01 13:44:04.844104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.430 [2024-10-01 13:44:04.853553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.853603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.430 [2024-10-01 13:44:04.853703] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.430 [2024-10-01 13:44:04.853741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.430 [2024-10-01 13:44:04.853761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.853813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.853838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.431 [2024-10-01 13:44:04.853855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.853889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.853913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.854995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.855031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.855048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.855065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.855080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.855094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.855312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.431 [2024-10-01 13:44:04.855339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.431 [2024-10-01 13:44:04.864359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.864408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.864507] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.864554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.431 [2024-10-01 13:44:04.864575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.864627] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.864652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.431 [2024-10-01 13:44:04.864669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.864702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.864725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.864752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.864771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.864803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.864821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.864837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.864850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.864884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.431 [2024-10-01 13:44:04.864904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.431 [2024-10-01 13:44:04.875259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.875313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.875413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.875451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.431 [2024-10-01 13:44:04.875472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.875527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.875569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.431 [2024-10-01 13:44:04.875587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.875622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.875646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.875673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.875690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.875704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.875722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.875737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.875751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.875782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.431 [2024-10-01 13:44:04.875802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.431 [2024-10-01 13:44:04.885393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.885469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.885566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.885611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.431 [2024-10-01 13:44:04.885632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.885702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.885730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.431 [2024-10-01 13:44:04.885765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.885786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.886051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.886091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.886109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.886124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.886270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.431 [2024-10-01 13:44:04.886302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.886319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.886333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.886444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.431 [2024-10-01 13:44:04.895955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.896005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.896104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.896137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.431 [2024-10-01 13:44:04.896155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.896204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.896229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.431 [2024-10-01 13:44:04.896245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.896279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.896302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.897386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.897427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.897445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.897463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.897478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.897491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.897733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.431 [2024-10-01 13:44:04.897761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.431 [2024-10-01 13:44:04.906732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.906797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.906897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.906931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.431 [2024-10-01 13:44:04.906949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.906999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.907023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.431 [2024-10-01 13:44:04.907040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.907073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.907097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.907123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.907141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.907155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.907173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.907187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.907201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.907232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.431 [2024-10-01 13:44:04.907251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.431 [2024-10-01 13:44:04.917648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.917699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.917797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.917829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.431 [2024-10-01 13:44:04.917847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.917901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.917927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.431 [2024-10-01 13:44:04.917943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.917976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.917999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.918026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.918043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.918057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.918092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.918110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.918124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.918156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.431 [2024-10-01 13:44:04.918176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.431 [2024-10-01 13:44:04.927778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.927855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.927952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.927984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.431 [2024-10-01 13:44:04.928002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.928076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.928103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.431 [2024-10-01 13:44:04.928120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.928139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.928403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.431 [2024-10-01 13:44:04.928443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.928461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.928476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.928636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.431 [2024-10-01 13:44:04.928662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.431 [2024-10-01 13:44:04.928677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.431 [2024-10-01 13:44:04.928692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.431 [2024-10-01 13:44:04.928801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.431 [2024-10-01 13:44:04.938287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.938336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.431 [2024-10-01 13:44:04.938436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.431 [2024-10-01 13:44:04.938468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.431 [2024-10-01 13:44:04.938486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.431 [2024-10-01 13:44:04.938550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:04.938578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.432 [2024-10-01 13:44:04.938594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:04.938649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:04.938673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:04.939763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:04.939803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:04.939829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:04.939848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:04.939863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:04.939886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:04.940130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.432 [2024-10-01 13:44:04.940160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.432 [2024-10-01 13:44:04.949096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:04.949146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:04.949245] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:04.949282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.432 [2024-10-01 13:44:04.949302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:04.949352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:04.949377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.432 [2024-10-01 13:44:04.949393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:04.949427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:04.949450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:04.949476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:04.949500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:04.949515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:04.949532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:04.949564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:04.949579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:04.949612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.432 [2024-10-01 13:44:04.949631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.432 [2024-10-01 13:44:04.960045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:04.960096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:04.960228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:04.960260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.432 [2024-10-01 13:44:04.960279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:04.960329] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:04.960354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.432 [2024-10-01 13:44:04.960370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:04.960404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:04.960427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:04.960454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:04.960472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:04.960486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:04.960503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:04.960518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:04.960531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:04.960586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.432 [2024-10-01 13:44:04.960606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.432 [2024-10-01 13:44:04.970206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:04.970283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:04.970365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:04.970396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.432 [2024-10-01 13:44:04.970414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:04.970482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:04.970510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.432 [2024-10-01 13:44:04.970526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:04.970562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:04.970829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:04.970869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:04.970887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:04.970902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:04.971048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.432 [2024-10-01 13:44:04.971088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:04.971106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:04.971121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:04.971232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.432 [2024-10-01 13:44:04.980757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:04.980807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:04.980906] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:04.980937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.432 [2024-10-01 13:44:04.980955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:04.981005] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:04.981030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.432 [2024-10-01 13:44:04.981046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:04.981079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:04.981102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:04.982185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:04.982225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:04.982244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:04.982262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:04.982277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:04.982290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:04.982508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.432 [2024-10-01 13:44:04.982552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.432 [2024-10-01 13:44:04.991517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:04.991583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:04.991685] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:04.991717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.432 [2024-10-01 13:44:04.991735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:04.991785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:04.991810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.432 [2024-10-01 13:44:04.991826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:04.991859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:04.991921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:04.991953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:04.991972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:04.991987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:04.992004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:04.992020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:04.992033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:04.992065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.432 [2024-10-01 13:44:04.992084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.432 [2024-10-01 13:44:05.002606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:05.002681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:05.002805] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:05.002840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.432 [2024-10-01 13:44:05.002859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:05.002910] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:05.002935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.432 [2024-10-01 13:44:05.002952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:05.002988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:05.003012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:05.003038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:05.003056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:05.003072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:05.003089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:05.003105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:05.003118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:05.003150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.432 [2024-10-01 13:44:05.003170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.432 [2024-10-01 13:44:05.012762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:05.012843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:05.012932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:05.013000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.432 [2024-10-01 13:44:05.013023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:05.013328] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:05.013371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.432 [2024-10-01 13:44:05.013391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:05.013411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:05.013574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:05.013604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:05.013619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:05.013634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:05.013745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.432 [2024-10-01 13:44:05.013768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:05.013783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:05.013797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:05.013835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.432 [2024-10-01 13:44:05.023198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:05.023249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:05.023347] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:05.023385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.432 [2024-10-01 13:44:05.023405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:05.023456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:05.023481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.432 [2024-10-01 13:44:05.023498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:05.023532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:05.023574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:05.024673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:05.024713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:05.024731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:05.024749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:05.024765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:05.024801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:05.025035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.432 [2024-10-01 13:44:05.025073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.432 [2024-10-01 13:44:05.034039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:05.034092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.432 [2024-10-01 13:44:05.034194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:05.034250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.432 [2024-10-01 13:44:05.034271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:05.034323] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.432 [2024-10-01 13:44:05.034348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.432 [2024-10-01 13:44:05.034365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.432 [2024-10-01 13:44:05.034399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:05.034423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.432 [2024-10-01 13:44:05.034449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:05.034467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:05.034482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:05.034499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.432 [2024-10-01 13:44:05.034514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.432 [2024-10-01 13:44:05.034528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.432 [2024-10-01 13:44:05.034578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.432 [2024-10-01 13:44:05.034599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.432 [2024-10-01 13:44:05.045001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.045075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.045191] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.045225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.433 [2024-10-01 13:44:05.045244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.045295] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.045320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.433 [2024-10-01 13:44:05.045337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.045372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.045396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.045452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.045471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.045488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.045505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.045520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.045549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.045587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.433 [2024-10-01 13:44:05.045608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.433 [2024-10-01 13:44:05.055173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.055295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.055411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.055444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.433 [2024-10-01 13:44:05.055464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.055548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.055578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.433 [2024-10-01 13:44:05.055596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.055618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.055908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.055950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.055968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.055984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.056134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.433 [2024-10-01 13:44:05.056162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.056177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.056191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.056307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.433 [2024-10-01 13:44:05.065813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.065867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.065968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.066000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.433 [2024-10-01 13:44:05.066054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.066112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.066138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.433 [2024-10-01 13:44:05.066155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.067250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.067297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.067529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.067583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.067601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.067619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.067634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.067647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.068731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.433 [2024-10-01 13:44:05.068769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.433 [2024-10-01 13:44:05.076584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.076636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.076738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.076777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.433 [2024-10-01 13:44:05.076797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.076847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.076872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.433 [2024-10-01 13:44:05.076889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.076923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.076946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.076973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.076991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.077006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.077023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.077039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.077052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.077101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.433 [2024-10-01 13:44:05.077123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.433 [2024-10-01 13:44:05.087435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.087489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.087607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.087643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.433 [2024-10-01 13:44:05.087662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.087714] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.087739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.433 [2024-10-01 13:44:05.087756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.087789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.087813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.087840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.087858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.087872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.087907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.087923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.087937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.087970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.433 [2024-10-01 13:44:05.087991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.433 [2024-10-01 13:44:05.097586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.097664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.097749] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.097802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.433 [2024-10-01 13:44:05.097822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.098127] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.098170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.433 [2024-10-01 13:44:05.098190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.098210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.098355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.098389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.098426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.098443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.098571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.433 [2024-10-01 13:44:05.098597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.098612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.098626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.098665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.433 [2024-10-01 13:44:05.108070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.108121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.108221] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.108254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.433 [2024-10-01 13:44:05.108271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.108320] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.108344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.433 [2024-10-01 13:44:05.108361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.108394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.108417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.109505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.109556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.109576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.109601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.109629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.109644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.109927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.433 [2024-10-01 13:44:05.109968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.433 [2024-10-01 13:44:05.118848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.118898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.118999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.119031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.433 [2024-10-01 13:44:05.119049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.119124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.119151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.433 [2024-10-01 13:44:05.119168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.119203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.119226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.119254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.119272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.119286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.119303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.119318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.119332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.119364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.433 [2024-10-01 13:44:05.119384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.433 [2024-10-01 13:44:05.129711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.129765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.433 [2024-10-01 13:44:05.129866] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.129904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.433 [2024-10-01 13:44:05.129924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.129975] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.433 [2024-10-01 13:44:05.130000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.433 [2024-10-01 13:44:05.130016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.433 [2024-10-01 13:44:05.130050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.130073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.433 [2024-10-01 13:44:05.130100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.130118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.130132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.130149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.433 [2024-10-01 13:44:05.130164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.433 [2024-10-01 13:44:05.130178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.433 [2024-10-01 13:44:05.130209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.433 [2024-10-01 13:44:05.130243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.433 [2024-10-01 13:44:05.139849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.139937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.140026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.140073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.434 [2024-10-01 13:44:05.140094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.140164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.140193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.434 [2024-10-01 13:44:05.140209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.140229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.140493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.140548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.140570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.140584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.140718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.434 [2024-10-01 13:44:05.140742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.140756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.140770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.140878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.434 [2024-10-01 13:44:05.150423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.150505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.150643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.150679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.434 [2024-10-01 13:44:05.150698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.150750] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.150774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.434 [2024-10-01 13:44:05.150790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.151920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.151967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.152207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.152246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.152298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.152318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.152335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.152348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.153445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.434 [2024-10-01 13:44:05.153484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.434 [2024-10-01 13:44:05.161460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.161563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.161700] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.161739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.434 [2024-10-01 13:44:05.161759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.161811] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.161836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.434 [2024-10-01 13:44:05.161853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.161889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.161913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.161940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.161957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.161974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.161992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.162007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.162021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.162053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.434 [2024-10-01 13:44:05.162073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.434 [2024-10-01 13:44:05.172454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.172508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.172623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.172657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.434 [2024-10-01 13:44:05.172675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.172726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.172751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.434 [2024-10-01 13:44:05.172794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.172830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.172853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.172881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.172898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.172913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.172930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.172945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.172959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.172990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.434 [2024-10-01 13:44:05.173010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.434 [2024-10-01 13:44:05.182608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.182661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.182768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.182806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.434 [2024-10-01 13:44:05.182826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.182878] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.182903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.434 [2024-10-01 13:44:05.182920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.183183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.183227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.183370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.183406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.183424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.183442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.183457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.183470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.183596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.434 [2024-10-01 13:44:05.183621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.434 [2024-10-01 13:44:05.192970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.193035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.193136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.193168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.434 [2024-10-01 13:44:05.193186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.193236] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.193260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.434 [2024-10-01 13:44:05.193277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.193310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.193334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.194419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.194459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.194478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.194497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.194512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.194526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.194759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.434 [2024-10-01 13:44:05.194787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.434 [2024-10-01 13:44:05.203747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.203796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.203903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.203935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.434 [2024-10-01 13:44:05.203953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.204004] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.204029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.434 [2024-10-01 13:44:05.204045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.204078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.204101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.204127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.204145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.204159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.204193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.204211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.204224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.204257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.434 [2024-10-01 13:44:05.204276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.434 [2024-10-01 13:44:05.214657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.214710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.214808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.214841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.434 [2024-10-01 13:44:05.214859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.214908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.214942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.434 [2024-10-01 13:44:05.214958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.214991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.215014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.215041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.215058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.215072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.215089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.215104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.215118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.215150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.434 [2024-10-01 13:44:05.215170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.434 [2024-10-01 13:44:05.224794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.224845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.224942] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.224974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.434 [2024-10-01 13:44:05.224992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.225041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.225066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.434 [2024-10-01 13:44:05.225083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.225366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.225411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.225571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.225607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.225625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.225643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.225658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.225671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.225783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.434 [2024-10-01 13:44:05.225806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.434 [2024-10-01 13:44:05.235190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.235240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.434 [2024-10-01 13:44:05.235338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.235378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.434 [2024-10-01 13:44:05.235396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.235446] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.434 [2024-10-01 13:44:05.235471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.434 [2024-10-01 13:44:05.235487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.434 [2024-10-01 13:44:05.235520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.235559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.434 [2024-10-01 13:44:05.236670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.236710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.236729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.236746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.434 [2024-10-01 13:44:05.236761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.434 [2024-10-01 13:44:05.236775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.434 [2024-10-01 13:44:05.237005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.435 [2024-10-01 13:44:05.237033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.435 [2024-10-01 13:44:05.246047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.246096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.246215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.246254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.435 [2024-10-01 13:44:05.246273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.246324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.246348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.435 [2024-10-01 13:44:05.246364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.246397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.246421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.246448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.246465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.246480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.246497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.246512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.246526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.246574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.435 [2024-10-01 13:44:05.246595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.435 [2024-10-01 13:44:05.256954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.257005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.257103] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.257135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.435 [2024-10-01 13:44:05.257153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.257203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.257228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.435 [2024-10-01 13:44:05.257244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.257277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.257301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.257327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.257345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.257359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.257375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.257407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.257423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.257456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.435 [2024-10-01 13:44:05.257476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.435 [2024-10-01 13:44:05.267086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.267164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.267249] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.267295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.435 [2024-10-01 13:44:05.267316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.267633] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.267676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.435 [2024-10-01 13:44:05.267696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.267716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.267850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.267876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.267904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.267918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.268029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.435 [2024-10-01 13:44:05.268052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.268066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.268081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.268119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.435 [2024-10-01 13:44:05.277562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.277611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.277709] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.277741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.435 [2024-10-01 13:44:05.277759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.277809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.277834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.435 [2024-10-01 13:44:05.277850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.277883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.277926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.279012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.279053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.279072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.279090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.279105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.279119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.279358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.435 [2024-10-01 13:44:05.279387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.435 [2024-10-01 13:44:05.288394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.288444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.288555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.288588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.435 [2024-10-01 13:44:05.288606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.288658] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.288683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.435 [2024-10-01 13:44:05.288700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.288733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.288757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.288784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.288801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.288815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.288832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.288847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.288860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.288892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.435 [2024-10-01 13:44:05.288911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.435 [2024-10-01 13:44:05.299283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.299334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.299432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.299500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.435 [2024-10-01 13:44:05.299523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.299593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.299621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.435 [2024-10-01 13:44:05.299637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.299672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.299696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.299723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.299741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.299755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.299772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.299787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.299801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.299833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.435 [2024-10-01 13:44:05.299852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.435 [2024-10-01 13:44:05.309412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.309487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.309587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.309619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.435 [2024-10-01 13:44:05.309638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.309708] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.309736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.435 [2024-10-01 13:44:05.309753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.309772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.310041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.310093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.310111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.310125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.310259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.435 [2024-10-01 13:44:05.310282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.310312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.310327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.310437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.435 [2024-10-01 13:44:05.320010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.320064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.320164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.320206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.435 [2024-10-01 13:44:05.320225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.320275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.320300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.435 [2024-10-01 13:44:05.320317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.320350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.320373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.321464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.321503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.321522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.321552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.321572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.321586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.321823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.435 [2024-10-01 13:44:05.321851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.435 [2024-10-01 13:44:05.330866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.330916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.331014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.331052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.435 [2024-10-01 13:44:05.331070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.331119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.435 [2024-10-01 13:44:05.331144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.435 [2024-10-01 13:44:05.331161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.435 [2024-10-01 13:44:05.331194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.331217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.435 [2024-10-01 13:44:05.331269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.331288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.331302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.331320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.435 [2024-10-01 13:44:05.331335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.435 [2024-10-01 13:44:05.331348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.435 [2024-10-01 13:44:05.331380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.435 [2024-10-01 13:44:05.331398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.435 [2024-10-01 13:44:05.341785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.341836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.435 [2024-10-01 13:44:05.341935] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.341968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.436 [2024-10-01 13:44:05.341986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.342036] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.342060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.436 [2024-10-01 13:44:05.342077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.342110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.342133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.342160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.342178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.342192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.342208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.342223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.342236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.342269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.436 [2024-10-01 13:44:05.342288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.436 [2024-10-01 13:44:05.351926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.352001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.352084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.352138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.436 [2024-10-01 13:44:05.352176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.352250] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.352278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.436 [2024-10-01 13:44:05.352295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.352314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.352593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.352634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.352652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.352666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.352814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.436 [2024-10-01 13:44:05.352840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.352855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.352869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.352978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.436 [2024-10-01 13:44:05.362421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.362470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.362582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.362614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.436 [2024-10-01 13:44:05.362632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.362682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.362706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.436 [2024-10-01 13:44:05.362723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.362756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.362779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.363861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.363910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.363928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.363946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.363961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.363974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.364210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.436 [2024-10-01 13:44:05.364245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.436 [2024-10-01 13:44:05.373211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.373260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.373358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.373389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.436 [2024-10-01 13:44:05.373407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.373456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.373481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.436 [2024-10-01 13:44:05.373497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.373530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.373573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.373603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.373620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.373635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.373652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.373667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.373681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.373712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.436 [2024-10-01 13:44:05.373731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.436 [2024-10-01 13:44:05.384130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.384180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.384277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.384308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.436 [2024-10-01 13:44:05.384326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.384375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.384400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.436 [2024-10-01 13:44:05.384416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.384448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.384471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.384498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.384548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.384567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.384584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.384600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.384613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.384646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.436 [2024-10-01 13:44:05.384666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.436 [2024-10-01 13:44:05.394261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.394345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.394433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.394470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.436 [2024-10-01 13:44:05.394490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.394574] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.394604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.436 [2024-10-01 13:44:05.394621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.394642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.394908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.394949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.394968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.394983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.395129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.436 [2024-10-01 13:44:05.395156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.395171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.395186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.395296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.436 [2024-10-01 13:44:05.405178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.405267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.405401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.405436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.436 [2024-10-01 13:44:05.405455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.405560] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.405589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.436 [2024-10-01 13:44:05.405606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.405643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.405668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.405714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.405737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.405754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.405771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.405787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.405800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.405833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.436 [2024-10-01 13:44:05.405853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.436 [2024-10-01 13:44:05.415352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.415434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.415518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.415562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.436 [2024-10-01 13:44:05.415582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.415651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.415679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.436 [2024-10-01 13:44:05.415695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.415714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.415747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.415768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.415792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.415807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.415839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.436 [2024-10-01 13:44:05.415859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.415873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.415901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.416854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.436 [2024-10-01 13:44:05.425454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.425584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.425629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.436 [2024-10-01 13:44:05.425650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.425699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.425741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.425774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.425791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.425805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.425836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.436 [2024-10-01 13:44:05.425897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.425924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.436 [2024-10-01 13:44:05.425940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.427272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.428254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.428294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.428312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.428428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.436 [2024-10-01 13:44:05.436808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.436857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.436 [2024-10-01 13:44:05.436956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.436993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.436 [2024-10-01 13:44:05.437013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.437063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.436 [2024-10-01 13:44:05.437088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.436 [2024-10-01 13:44:05.437104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.436 [2024-10-01 13:44:05.438173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.438217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.436 [2024-10-01 13:44:05.438839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.438877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.438915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.438935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.436 [2024-10-01 13:44:05.438950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.436 [2024-10-01 13:44:05.438964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.436 [2024-10-01 13:44:05.439040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.437 [2024-10-01 13:44:05.439063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.437 [2024-10-01 13:44:05.446938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.447013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.447095] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.447126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.437 [2024-10-01 13:44:05.447144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.447211] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.447238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.437 [2024-10-01 13:44:05.447255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.447274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.447307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.447328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.447342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.447356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.447387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.437 [2024-10-01 13:44:05.447407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.447421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.447435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.448662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.437 [2024-10-01 13:44:05.457633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.457683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.457783] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.457816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.437 [2024-10-01 13:44:05.457834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.457884] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.457930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.437 [2024-10-01 13:44:05.457949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.457983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.458007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.458034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.458052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.458066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.458083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.458098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.458111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.458144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.437 [2024-10-01 13:44:05.458164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.437 [2024-10-01 13:44:05.468120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.468177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.468280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.468312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.437 [2024-10-01 13:44:05.468330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.468380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.468406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.437 [2024-10-01 13:44:05.468422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.468455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.468479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.468505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.468523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.468552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.468572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.468589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.468602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.468864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.437 [2024-10-01 13:44:05.468891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.437 [2024-10-01 13:44:05.478950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.479064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.479196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.479231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.437 [2024-10-01 13:44:05.479250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.479301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.479326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.437 [2024-10-01 13:44:05.479343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.480458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.480504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.480752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.480790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.480808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.480827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.480842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.480856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.481937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.437 [2024-10-01 13:44:05.481975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.437 [2024-10-01 13:44:05.489811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.489861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.489960] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.489998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.437 [2024-10-01 13:44:05.490016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.490066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.490090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.437 [2024-10-01 13:44:05.490107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.490139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.490162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.490188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.490207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.490221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.490254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.490277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.490291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.490323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.437 [2024-10-01 13:44:05.490343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.437 [2024-10-01 13:44:05.500733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.500784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.500881] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.500913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.437 [2024-10-01 13:44:05.500930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.500980] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.501006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.437 [2024-10-01 13:44:05.501022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.501055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.501079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.501105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.501123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.501138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.501155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.501170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.501184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.501215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.437 [2024-10-01 13:44:05.501235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.437 [2024-10-01 13:44:05.510874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.510986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.511095] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.511129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.437 [2024-10-01 13:44:05.511147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.511215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.511243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.437 [2024-10-01 13:44:05.511336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.511361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.511653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.511695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.511713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.511728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.511887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.437 [2024-10-01 13:44:05.511915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.511930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.511945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.512057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.437 [2024-10-01 13:44:05.521480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.521532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.521651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.521684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.437 [2024-10-01 13:44:05.521703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.521753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.521778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.437 [2024-10-01 13:44:05.521795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.521828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.521851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.437 [2024-10-01 13:44:05.522944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.522984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.523003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.523021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.437 [2024-10-01 13:44:05.523037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.437 [2024-10-01 13:44:05.523050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.437 [2024-10-01 13:44:05.523297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.437 [2024-10-01 13:44:05.523327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.437 [2024-10-01 13:44:05.532530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.532636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.437 [2024-10-01 13:44:05.532808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.532849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.437 [2024-10-01 13:44:05.532871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.437 [2024-10-01 13:44:05.532922] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.437 [2024-10-01 13:44:05.532948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.437 [2024-10-01 13:44:05.532964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.533000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.533024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.533052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.533070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.533086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.533103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.533118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.533131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.533164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.438 [2024-10-01 13:44:05.533184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.438 [2024-10-01 13:44:05.543432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.543483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.543597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.543631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.438 [2024-10-01 13:44:05.543648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.543699] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.543724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.438 [2024-10-01 13:44:05.543740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.543774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.543798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.543825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.543842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.543856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.543873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.543919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.543935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.543969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.438 [2024-10-01 13:44:05.543989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.438 [2024-10-01 13:44:05.553586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.553667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.553754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.553785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.438 [2024-10-01 13:44:05.553803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.553871] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.553899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.438 [2024-10-01 13:44:05.553916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.553935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.553968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.553989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.554003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.554017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.554279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.438 [2024-10-01 13:44:05.554308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.554324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.554338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.554482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.438 [2024-10-01 13:44:05.564335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.564385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.564486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.564519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.438 [2024-10-01 13:44:05.564552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.564608] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.564634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.438 [2024-10-01 13:44:05.564650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.565796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.565844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.566081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.566119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.566137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.566155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.566171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.566184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.567265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.438 [2024-10-01 13:44:05.567303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.438 [2024-10-01 13:44:05.575395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.575444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.575556] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.575588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.438 [2024-10-01 13:44:05.575606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.575657] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.575682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.438 [2024-10-01 13:44:05.575698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.575732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.575761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.575788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.575806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.575821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.575838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.575853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.575867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.575910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.438 [2024-10-01 13:44:05.575931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.438 [2024-10-01 13:44:05.586446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.586511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.586632] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.586691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.438 [2024-10-01 13:44:05.586713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.586766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.586791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.438 [2024-10-01 13:44:05.586808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.586843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.586867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.586894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.586912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.586927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.586944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.586959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.586973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.587006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.438 [2024-10-01 13:44:05.587026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.438 [2024-10-01 13:44:05.596852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.596927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.597045] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.597079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.438 [2024-10-01 13:44:05.597097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.597148] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.597172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.438 [2024-10-01 13:44:05.597189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.597223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.597247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.597274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.597292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.597307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.597324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.597339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.597375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.597661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.438 [2024-10-01 13:44:05.597690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.438 [2024-10-01 13:44:05.607553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.607602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.607701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.607733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.438 [2024-10-01 13:44:05.607751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.607801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.607826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.438 [2024-10-01 13:44:05.607849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.607893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.607918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.609019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.609061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.609079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.609097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.609112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.609126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.609348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.438 [2024-10-01 13:44:05.609375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.438 [2024-10-01 13:44:05.618394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.618444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.618591] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.618629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.438 [2024-10-01 13:44:05.618649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.618704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.618730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.438 [2024-10-01 13:44:05.618747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.618782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.618824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.618855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.618873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.618887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.618905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.618920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.618934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.618965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.438 [2024-10-01 13:44:05.618984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.438 [2024-10-01 13:44:05.629310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.629372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.629476] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.629509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.438 [2024-10-01 13:44:05.629527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.629597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.629623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.438 [2024-10-01 13:44:05.629640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.438 [2024-10-01 13:44:05.629674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.629697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.438 [2024-10-01 13:44:05.629724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.629742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.629757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.629774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.438 [2024-10-01 13:44:05.629789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.438 [2024-10-01 13:44:05.629802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.438 [2024-10-01 13:44:05.629835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.438 [2024-10-01 13:44:05.629855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.438 [2024-10-01 13:44:05.639649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.639742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.438 [2024-10-01 13:44:05.639889] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.438 [2024-10-01 13:44:05.639926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.438 [2024-10-01 13:44:05.639977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.640033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.640059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.439 [2024-10-01 13:44:05.640076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.640356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.640401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.640565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.640601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.640619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.640638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.640653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.640667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.640801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.439 [2024-10-01 13:44:05.640827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.439 [2024-10-01 13:44:05.650444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.650512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.650647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.650682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.439 [2024-10-01 13:44:05.650701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.650750] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.650775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.439 [2024-10-01 13:44:05.650791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.651901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.651947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.652150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.652185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.652204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.652221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.652236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.652250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.653345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.439 [2024-10-01 13:44:05.653385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.439 [2024-10-01 13:44:05.661313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.661362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.661461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.661493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.439 [2024-10-01 13:44:05.661511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.661575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.661603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.439 [2024-10-01 13:44:05.661620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.661655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.661678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.661705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.661722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.661737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.661754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.661769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.661783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.661814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.439 [2024-10-01 13:44:05.661833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.439 8642.08 IOPS, 33.76 MiB/s [2024-10-01 13:44:05.672352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.672400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.672498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.672529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.439 [2024-10-01 13:44:05.672563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.672617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.672642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.439 [2024-10-01 13:44:05.672659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.672692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.672715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.672764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.672784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.672798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.672815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.672830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.672843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.672875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.439 [2024-10-01 13:44:05.672895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.439 [2024-10-01 13:44:05.682484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.682580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.682665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.682713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.439 [2024-10-01 13:44:05.682733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.683042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.683084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.439 [2024-10-01 13:44:05.683104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.683124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.683269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.683298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.683313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.683328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.683440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.439 [2024-10-01 13:44:05.683463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.683478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.683492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.683530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.439 [2024-10-01 13:44:05.693060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.693121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.693231] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.693264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.439 [2024-10-01 13:44:05.693283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.693366] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.693393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.439 [2024-10-01 13:44:05.693410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.694510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.694569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.694806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.694845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.694863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.694881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.694897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.694910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.694953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.439 [2024-10-01 13:44:05.694975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.439 [2024-10-01 13:44:05.703199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.703279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.703361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.703392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.439 [2024-10-01 13:44:05.703410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.704394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.704438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.439 [2024-10-01 13:44:05.704459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.704478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.704692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.704733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.704751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.704766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.704808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.439 [2024-10-01 13:44:05.704830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.704845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.704859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.704909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.439 [2024-10-01 13:44:05.715586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.715636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.715734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.715765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.439 [2024-10-01 13:44:05.715783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.715832] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.715856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.439 [2024-10-01 13:44:05.715873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.715919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.715943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.715970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.715988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.716003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.716019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.716035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.716048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.716081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.439 [2024-10-01 13:44:05.716100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.439 [2024-10-01 13:44:05.725819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.725903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.726026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.726061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.439 [2024-10-01 13:44:05.726081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.726131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.726156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.439 [2024-10-01 13:44:05.726173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.726218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.726241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.726269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.726307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.726324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.726342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.726357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.726370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.726656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.439 [2024-10-01 13:44:05.726685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.439 [2024-10-01 13:44:05.736624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.736673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.736776] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.736808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.439 [2024-10-01 13:44:05.736826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.736876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.439 [2024-10-01 13:44:05.736901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.439 [2024-10-01 13:44:05.736917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.439 [2024-10-01 13:44:05.736951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.736974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.439 [2024-10-01 13:44:05.738061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.738101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.738120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.738138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.439 [2024-10-01 13:44:05.738153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.439 [2024-10-01 13:44:05.738167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.439 [2024-10-01 13:44:05.738396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.439 [2024-10-01 13:44:05.738424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.439 [2024-10-01 13:44:05.747490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.439 [2024-10-01 13:44:05.747553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.747654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.747701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.440 [2024-10-01 13:44:05.747721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.747772] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.747814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.440 [2024-10-01 13:44:05.747833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.747868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.747903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.747932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.747950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.747965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.747982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.747997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.748010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.748042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.440 [2024-10-01 13:44:05.748062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.440 [2024-10-01 13:44:05.758422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.758473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.758584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.758617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.440 [2024-10-01 13:44:05.758635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.758686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.758711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.440 [2024-10-01 13:44:05.758728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.758762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.758786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.758813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.758831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.758845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.758861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.758877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.758890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.758922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.440 [2024-10-01 13:44:05.758941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.440 [2024-10-01 13:44:05.768570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.768645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.768728] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.768760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.440 [2024-10-01 13:44:05.768779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.768845] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.768873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.440 [2024-10-01 13:44:05.768889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.768908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.768941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.768962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.768976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.768991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.769253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.440 [2024-10-01 13:44:05.769281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.769297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.769311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.769447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.440 [2024-10-01 13:44:05.779185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.779234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.779334] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.779372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.440 [2024-10-01 13:44:05.779391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.779442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.779472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.440 [2024-10-01 13:44:05.779489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.779522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.779563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.780662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.780700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.780738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.780758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.780773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.780788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.781017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.440 [2024-10-01 13:44:05.781055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.440 [2024-10-01 13:44:05.790026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.790075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.790173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.790204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.440 [2024-10-01 13:44:05.790222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.790272] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.790297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.440 [2024-10-01 13:44:05.790314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.790347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.790370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.790397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.790415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.790429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.790446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.790461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.790474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.790506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.440 [2024-10-01 13:44:05.790526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.440 [2024-10-01 13:44:05.800933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.800983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.801081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.801113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.440 [2024-10-01 13:44:05.801131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.801181] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.801206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.440 [2024-10-01 13:44:05.801242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.801278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.801301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.801328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.801346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.801361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.801378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.801393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.801407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.801439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.440 [2024-10-01 13:44:05.801458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.440 [2024-10-01 13:44:05.811067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.811117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.811214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.811246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.440 [2024-10-01 13:44:05.811263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.811313] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.811338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.440 [2024-10-01 13:44:05.811354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.811387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.811411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.811683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.811722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.811740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.811758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.811773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.811787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.811944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.440 [2024-10-01 13:44:05.811970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.440 [2024-10-01 13:44:05.821587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.821653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.821755] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.821787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.440 [2024-10-01 13:44:05.821805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.821855] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.821880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.440 [2024-10-01 13:44:05.821897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.821929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.821953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.823039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.823078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.823097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.823114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.823129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.823143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.823362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.440 [2024-10-01 13:44:05.823389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.440 [2024-10-01 13:44:05.832821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.832906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.832989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.833029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.440 [2024-10-01 13:44:05.833049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.833119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.833154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.440 [2024-10-01 13:44:05.833170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.833190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.833224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.440 [2024-10-01 13:44:05.833244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.833259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.833273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.833329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.440 [2024-10-01 13:44:05.833351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.440 [2024-10-01 13:44:05.833365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.440 [2024-10-01 13:44:05.833379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.440 [2024-10-01 13:44:05.833409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.440 [2024-10-01 13:44:05.843834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.843892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.440 [2024-10-01 13:44:05.843993] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.440 [2024-10-01 13:44:05.844025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.440 [2024-10-01 13:44:05.844043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.440 [2024-10-01 13:44:05.844094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.844118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.441 [2024-10-01 13:44:05.844135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.844168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.844191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.844218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.844235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.844250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.844266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.844282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.844296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.844328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.441 [2024-10-01 13:44:05.844347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.441 [2024-10-01 13:44:05.854287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.854336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.854433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.854465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.441 [2024-10-01 13:44:05.854483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.854546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.854574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.441 [2024-10-01 13:44:05.854591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.854647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.854672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.854929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.854967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.854985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.855002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.855018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.855033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.855177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.441 [2024-10-01 13:44:05.855203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.441 [2024-10-01 13:44:05.865194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.865244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.865346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.865379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.441 [2024-10-01 13:44:05.865397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.865447] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.865472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.441 [2024-10-01 13:44:05.865488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.865520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.865560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.866658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.866697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.866715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.866732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.866748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.866761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.866987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.441 [2024-10-01 13:44:05.867024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.441 [2024-10-01 13:44:05.876264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.876321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.876461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.876494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.441 [2024-10-01 13:44:05.876513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.876582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.876609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.441 [2024-10-01 13:44:05.876626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.876661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.876686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.876714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.876731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.876746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.876764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.876780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.876793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.876825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.441 [2024-10-01 13:44:05.876844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.441 [2024-10-01 13:44:05.887764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.887847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.888018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.888067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.441 [2024-10-01 13:44:05.888094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.888151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.888176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.441 [2024-10-01 13:44:05.888193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.888228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.888253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.888280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.888298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.888314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.888332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.888368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.888385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.888448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.441 [2024-10-01 13:44:05.888473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.441 [2024-10-01 13:44:05.897966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.898044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.898129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.898172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.441 [2024-10-01 13:44:05.898193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.898262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.898290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.441 [2024-10-01 13:44:05.898307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.898327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.898617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.898658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.898676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.898691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.898823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.441 [2024-10-01 13:44:05.898847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.898861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.898876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.898988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.441 [2024-10-01 13:44:05.908669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.908723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.908825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.908857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.441 [2024-10-01 13:44:05.908875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.908925] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.908950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.441 [2024-10-01 13:44:05.908966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.909000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.909047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.910143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.910184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.910203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.910221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.910237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.910250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.910491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.441 [2024-10-01 13:44:05.910530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.441 [2024-10-01 13:44:05.919524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.919587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.919686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.919718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.441 [2024-10-01 13:44:05.919737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.919786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.919811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.441 [2024-10-01 13:44:05.919828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.919861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.919897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.919927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.919945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.919959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.919976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.919991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.920005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.920037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.441 [2024-10-01 13:44:05.920055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.441 [2024-10-01 13:44:05.930444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.930501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.930622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.930654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.441 [2024-10-01 13:44:05.930693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.930749] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.930775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.441 [2024-10-01 13:44:05.930792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.930826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.930849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.930876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.930894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.930908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.930926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.930941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.930955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.930986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.441 [2024-10-01 13:44:05.931006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.441 [2024-10-01 13:44:05.940615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.940672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.940774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.940806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.441 [2024-10-01 13:44:05.940825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.940875] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.441 [2024-10-01 13:44:05.940899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.441 [2024-10-01 13:44:05.940916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.441 [2024-10-01 13:44:05.940949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.940972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.441 [2024-10-01 13:44:05.940999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.941017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.941032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.941049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.441 [2024-10-01 13:44:05.941064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.441 [2024-10-01 13:44:05.941094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.441 [2024-10-01 13:44:05.941360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.441 [2024-10-01 13:44:05.941388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.441 [2024-10-01 13:44:05.951259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.951312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.441 [2024-10-01 13:44:05.951412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:05.951444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.442 [2024-10-01 13:44:05.951462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:05.951512] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:05.951550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.442 [2024-10-01 13:44:05.951571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:05.951605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:05.951634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:05.952738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:05.952779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:05.952797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:05.952815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:05.952831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:05.952844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:05.953066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.442 [2024-10-01 13:44:05.953093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.442 [2024-10-01 13:44:05.962121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:05.962180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:05.962287] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:05.962320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.442 [2024-10-01 13:44:05.962338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:05.962387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:05.962412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.442 [2024-10-01 13:44:05.962429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:05.962463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:05.962486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:05.962555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:05.962578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:05.962593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:05.962610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:05.962625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:05.962639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:05.962672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.442 [2024-10-01 13:44:05.962692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.442 [2024-10-01 13:44:05.973123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:05.973183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:05.973290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:05.973323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.442 [2024-10-01 13:44:05.973341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:05.973391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:05.973416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.442 [2024-10-01 13:44:05.973432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:05.973467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:05.973490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:05.973517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:05.973549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:05.973568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:05.973586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:05.973601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:05.973614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:05.973647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.442 [2024-10-01 13:44:05.973667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.442 [2024-10-01 13:44:05.983261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:05.983337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:05.983419] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:05.983449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.442 [2024-10-01 13:44:05.983467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:05.983580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:05.983610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.442 [2024-10-01 13:44:05.983626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:05.983645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:05.983933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:05.983970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:05.983987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:05.984002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:05.984147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.442 [2024-10-01 13:44:05.984173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:05.984188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:05.984203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:05.984317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.442 [2024-10-01 13:44:05.993913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:05.993991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:05.994112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:05.994146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.442 [2024-10-01 13:44:05.994165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:05.994216] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:05.994241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.442 [2024-10-01 13:44:05.994258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:05.995416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:05.995466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:05.995709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:05.995748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:05.995768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:05.995787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:05.995803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:05.995817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:05.996903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.442 [2024-10-01 13:44:05.996965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.442 [2024-10-01 13:44:06.004771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:06.004822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:06.004923] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:06.004956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.442 [2024-10-01 13:44:06.004974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:06.005024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:06.005049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.442 [2024-10-01 13:44:06.005065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:06.005099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:06.005123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:06.005150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:06.005168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:06.005183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:06.005200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:06.005215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:06.005228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:06.005260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.442 [2024-10-01 13:44:06.005279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.442 [2024-10-01 13:44:06.015610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:06.015665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:06.015765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:06.015798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.442 [2024-10-01 13:44:06.015816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:06.015866] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:06.015903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.442 [2024-10-01 13:44:06.015921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:06.015955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:06.015978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:06.016005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:06.016042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:06.016058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:06.016075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:06.016091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:06.016105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:06.016138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.442 [2024-10-01 13:44:06.016158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.442 [2024-10-01 13:44:06.025742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:06.025817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:06.025901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:06.025933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.442 [2024-10-01 13:44:06.025951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:06.026018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:06.026046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.442 [2024-10-01 13:44:06.026062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:06.026081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:06.026346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:06.026387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:06.026405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:06.026419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:06.026582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.442 [2024-10-01 13:44:06.026610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:06.026625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:06.026640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:06.026749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.442 [2024-10-01 13:44:06.036296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:06.036344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:06.036444] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:06.036481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.442 [2024-10-01 13:44:06.036501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:06.036568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:06.036613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.442 [2024-10-01 13:44:06.036632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:06.037725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:06.037770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.442 [2024-10-01 13:44:06.038000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:06.038038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:06.038056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:06.038074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.442 [2024-10-01 13:44:06.038089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.442 [2024-10-01 13:44:06.038103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.442 [2024-10-01 13:44:06.039171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.442 [2024-10-01 13:44:06.039208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.442 [2024-10-01 13:44:06.047088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:06.047137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.442 [2024-10-01 13:44:06.047236] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:06.047270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.442 [2024-10-01 13:44:06.047288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.442 [2024-10-01 13:44:06.047338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.442 [2024-10-01 13:44:06.047363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.442 [2024-10-01 13:44:06.047379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.047413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.047436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.047463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.047481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.047495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.047511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.047526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.047559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.047595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.443 [2024-10-01 13:44:06.047615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.443 [2024-10-01 13:44:06.057985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.058036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.058133] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.058165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.443 [2024-10-01 13:44:06.058183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.058233] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.058258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.443 [2024-10-01 13:44:06.058274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.058307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.058330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.058357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.058376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.058390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.058407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.058423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.058436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.058468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.443 [2024-10-01 13:44:06.058487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.443 [2024-10-01 13:44:06.068116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.068191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.068273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.068303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.443 [2024-10-01 13:44:06.068321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.068388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.068416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.443 [2024-10-01 13:44:06.068432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.068451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.068732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.068773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.068791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.068825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.068973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.443 [2024-10-01 13:44:06.069000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.069015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.069030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.069140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.443 [2024-10-01 13:44:06.078827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.078899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.079021] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.079056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.443 [2024-10-01 13:44:06.079075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.079126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.079151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.443 [2024-10-01 13:44:06.079168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.080283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.080329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.080581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.080619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.080638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.080657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.080673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.080686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.081758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.443 [2024-10-01 13:44:06.081796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.443 [2024-10-01 13:44:06.089702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.089751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.089848] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.089880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.443 [2024-10-01 13:44:06.089898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.089948] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.089973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.443 [2024-10-01 13:44:06.090015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.090051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.090075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.090103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.090121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.090135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.090152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.090168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.090181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.090213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.443 [2024-10-01 13:44:06.090232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.443 [2024-10-01 13:44:06.100600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.100675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.100793] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.100826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.443 [2024-10-01 13:44:06.100845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.100895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.100920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.443 [2024-10-01 13:44:06.100937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.100972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.100996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.101023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.101040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.101056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.101073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.101088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.101102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.101134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.443 [2024-10-01 13:44:06.101154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.443 [2024-10-01 13:44:06.110760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.110867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.110954] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.110986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.443 [2024-10-01 13:44:06.111004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.111071] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.111099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.443 [2024-10-01 13:44:06.111116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.111135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.111399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.111439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.111456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.111471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.111634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.443 [2024-10-01 13:44:06.111662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.111677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.111691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.111801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.443 [2024-10-01 13:44:06.121318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.121373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.121474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.121505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.443 [2024-10-01 13:44:06.121523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.121591] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.121618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.443 [2024-10-01 13:44:06.121634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.121668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.121691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.122776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.122816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.122834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.122872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.122890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.122904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.123125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.443 [2024-10-01 13:44:06.123164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.443 [2024-10-01 13:44:06.132129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.132178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.132275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.132307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.443 [2024-10-01 13:44:06.132324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.132374] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.132399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.443 [2024-10-01 13:44:06.132416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.132449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.132472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.132499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.132517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.132532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.132568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.132584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.132598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.132631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.443 [2024-10-01 13:44:06.132650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.443 [2024-10-01 13:44:06.143037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.143092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.143205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.143240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.443 [2024-10-01 13:44:06.143259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.143310] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.443 [2024-10-01 13:44:06.143335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.443 [2024-10-01 13:44:06.143351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.443 [2024-10-01 13:44:06.143407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.143432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.443 [2024-10-01 13:44:06.143460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.143477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.143492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.143509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.443 [2024-10-01 13:44:06.143525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.443 [2024-10-01 13:44:06.143555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.443 [2024-10-01 13:44:06.143591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.443 [2024-10-01 13:44:06.143611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.443 [2024-10-01 13:44:06.153186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.153266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.443 [2024-10-01 13:44:06.153351] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.153392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.444 [2024-10-01 13:44:06.153413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.153735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.153777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.444 [2024-10-01 13:44:06.153797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.153817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.153963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.153993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.154009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.154023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.154139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.444 [2024-10-01 13:44:06.154169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.154185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.154200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.154242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.444 [2024-10-01 13:44:06.163610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.163662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.163789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.163833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.444 [2024-10-01 13:44:06.163854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.163923] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.163951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.444 [2024-10-01 13:44:06.163968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.165073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.165123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.165381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.165421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.165439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.165458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.165474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.165488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.166586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.444 [2024-10-01 13:44:06.166625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.444 [2024-10-01 13:44:06.174595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.174658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.174785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.174819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.444 [2024-10-01 13:44:06.174838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.174889] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.174914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.444 [2024-10-01 13:44:06.174931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.174966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.174989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.175016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.175034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.175050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.175068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.175107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.175125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.175174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.444 [2024-10-01 13:44:06.175197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.444 [2024-10-01 13:44:06.185632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.185692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.185796] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.185836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.444 [2024-10-01 13:44:06.185856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.185909] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.185934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.444 [2024-10-01 13:44:06.185951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.185986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.186010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.186037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.186054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.186069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.186086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.186101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.186115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.186147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.444 [2024-10-01 13:44:06.186166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.444 [2024-10-01 13:44:06.195775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.195855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.195956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.195990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.444 [2024-10-01 13:44:06.196009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.196326] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.196370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.444 [2024-10-01 13:44:06.196390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.196411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.196604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.196639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.196655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.196670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.196783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.444 [2024-10-01 13:44:06.196806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.196820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.196834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.196873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.444 [2024-10-01 13:44:06.206194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.206263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.206378] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.206412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.444 [2024-10-01 13:44:06.206431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.206482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.206507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.444 [2024-10-01 13:44:06.206524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.207662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.207709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.207971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.208010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.208030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.208049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.208064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.208079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.209185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.444 [2024-10-01 13:44:06.209227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.444 [2024-10-01 13:44:06.217181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.217235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.217351] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.217396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.444 [2024-10-01 13:44:06.217448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.217505] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.217531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.444 [2024-10-01 13:44:06.217569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.217606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.217630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.217657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.217675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.217690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.217707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.217723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.217737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.217769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.444 [2024-10-01 13:44:06.217789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.444 [2024-10-01 13:44:06.228254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.228331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.228453] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.228488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.444 [2024-10-01 13:44:06.228507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.228575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.228603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.444 [2024-10-01 13:44:06.228620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.228656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.228680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.228708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.228726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.228741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.228759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.228774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.228813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.229592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.444 [2024-10-01 13:44:06.229633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.444 [2024-10-01 13:44:06.238418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.238495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.238594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.238626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.444 [2024-10-01 13:44:06.238644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.238949] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.238991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.444 [2024-10-01 13:44:06.239011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.444 [2024-10-01 13:44:06.239031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.239186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.444 [2024-10-01 13:44:06.239228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.239246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.239260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.239373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.444 [2024-10-01 13:44:06.239396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.444 [2024-10-01 13:44:06.239410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.444 [2024-10-01 13:44:06.239425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.444 [2024-10-01 13:44:06.239464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.444 [2024-10-01 13:44:06.248738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.248790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.444 [2024-10-01 13:44:06.248890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.444 [2024-10-01 13:44:06.248922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.445 [2024-10-01 13:44:06.248940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.248991] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.249016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.445 [2024-10-01 13:44:06.249032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.250140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.250189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.250444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.250484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.250502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.250521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.250552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.250568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.251648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.445 [2024-10-01 13:44:06.251687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.445 [2024-10-01 13:44:06.259560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.259611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.259711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.259744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.445 [2024-10-01 13:44:06.259762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.259813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.259838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.445 [2024-10-01 13:44:06.259855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.259901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.259928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.259955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.259973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.259987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.260004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.260020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.260033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.260065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.445 [2024-10-01 13:44:06.260085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.445 [2024-10-01 13:44:06.270416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.270470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.270584] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.270618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.445 [2024-10-01 13:44:06.270637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.270713] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.270739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.445 [2024-10-01 13:44:06.270756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.270791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.270815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.270842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.270860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.270874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.270892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.270908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.270921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.270954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.445 [2024-10-01 13:44:06.270974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.445 [2024-10-01 13:44:06.280568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.280650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.280747] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.280784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.445 [2024-10-01 13:44:06.280804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.280873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.280902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.445 [2024-10-01 13:44:06.280918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.280938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.281226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.281269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.281287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.281303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.281436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.445 [2024-10-01 13:44:06.281461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.281476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.281506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.281637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.445 [2024-10-01 13:44:06.291321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.291377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.291481] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.291513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.445 [2024-10-01 13:44:06.291532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.291604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.291630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.445 [2024-10-01 13:44:06.291647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.291682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.291706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.292819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.292862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.292881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.292901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.292916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.292929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.293176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.445 [2024-10-01 13:44:06.293217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.445 [2024-10-01 13:44:06.302283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.302340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.302448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.302481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.445 [2024-10-01 13:44:06.302500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.302566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.302594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.445 [2024-10-01 13:44:06.302611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.302647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.302671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.302698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.302745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.302762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.302780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.302795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.302808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.302841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.445 [2024-10-01 13:44:06.302862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.445 [2024-10-01 13:44:06.313431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.313487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.313617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.313665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.445 [2024-10-01 13:44:06.313685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.313738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.313763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.445 [2024-10-01 13:44:06.313779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.313814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.313837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.313864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.313882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.313897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.313915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.313930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.313944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.313975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.445 [2024-10-01 13:44:06.313995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.445 [2024-10-01 13:44:06.324007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.324063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.324189] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.324235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.445 [2024-10-01 13:44:06.324256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.324309] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.324346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.445 [2024-10-01 13:44:06.324376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.324413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.324437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.324464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.324482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.324496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.324514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.324528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.324559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.324827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.445 [2024-10-01 13:44:06.324854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.445 [2024-10-01 13:44:06.335517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.335590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.335843] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.335902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.445 [2024-10-01 13:44:06.335925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.335980] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.336006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.445 [2024-10-01 13:44:06.336033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.337158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.337206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.337435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.337473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.337492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.337512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.337528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.337557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.337602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.445 [2024-10-01 13:44:06.337625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.445 [2024-10-01 13:44:06.345683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.345801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.445 [2024-10-01 13:44:06.345914] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.345949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.445 [2024-10-01 13:44:06.345975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.346046] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.445 [2024-10-01 13:44:06.346074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.445 [2024-10-01 13:44:06.346090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.445 [2024-10-01 13:44:06.346111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.346159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.445 [2024-10-01 13:44:06.346189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.346205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.346221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.346255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.445 [2024-10-01 13:44:06.346276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.445 [2024-10-01 13:44:06.346290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.445 [2024-10-01 13:44:06.346304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.445 [2024-10-01 13:44:06.347261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.445 [2024-10-01 13:44:06.355816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.355951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.355996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.446 [2024-10-01 13:44:06.356017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.356322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.356484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.356531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.356568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.356583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.356700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.446 [2024-10-01 13:44:06.356775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.356805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.446 [2024-10-01 13:44:06.356847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.356892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.356927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.356945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.356959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.356991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.446 [2024-10-01 13:44:06.367071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.367125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.368016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.368064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.446 [2024-10-01 13:44:06.368085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.368148] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.368183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.446 [2024-10-01 13:44:06.368201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.368386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.368428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.368505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.368526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.368557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.368577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.368593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.368606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.368641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.446 [2024-10-01 13:44:06.368661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.446 [2024-10-01 13:44:06.377222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.377301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.377387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.377417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.446 [2024-10-01 13:44:06.377441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.378243] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.378290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.446 [2024-10-01 13:44:06.378334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.378357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.378565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.378604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.378622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.378636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.378681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.446 [2024-10-01 13:44:06.378703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.378717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.378731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.379666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.446 [2024-10-01 13:44:06.387695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.387759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.387872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.387956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.446 [2024-10-01 13:44:06.387976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.388032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.388057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.446 [2024-10-01 13:44:06.388074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.388108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.388139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.388180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.388200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.388214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.388232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.388247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.388261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.388293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.446 [2024-10-01 13:44:06.388313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.446 [2024-10-01 13:44:06.397846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.399016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.399131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.399184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.446 [2024-10-01 13:44:06.399207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.399470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.399513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.446 [2024-10-01 13:44:06.399547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.399571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.400699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.400744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.400763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.400778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.401411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.446 [2024-10-01 13:44:06.401452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.401471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.401486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.401831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.446 [2024-10-01 13:44:06.408481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.408621] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.408665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.446 [2024-10-01 13:44:06.408685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.408720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.408753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.408770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.408785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.408817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.446 [2024-10-01 13:44:06.409105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.409217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.409255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.446 [2024-10-01 13:44:06.409275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.409325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.409358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.409376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.409390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.409422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.446 [2024-10-01 13:44:06.419608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.419700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.419834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.419870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.446 [2024-10-01 13:44:06.419906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.419963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.419989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.446 [2024-10-01 13:44:06.420006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.420042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.420066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.420093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.420112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.420133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.420163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.420181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.420195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.420231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.446 [2024-10-01 13:44:06.420251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.446 [2024-10-01 13:44:06.430063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.430161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.430298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.430334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.446 [2024-10-01 13:44:06.430353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.430404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.430429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.446 [2024-10-01 13:44:06.430446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.430768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.430813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.430980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.431017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.431037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.431055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.431070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.431084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.431207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.446 [2024-10-01 13:44:06.431235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.446 [2024-10-01 13:44:06.440751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.440804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.440904] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.440937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.446 [2024-10-01 13:44:06.440955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.441005] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.441030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.446 [2024-10-01 13:44:06.441046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.441080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.441104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.442209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.442253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.442272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.442290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.442306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.442319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.442563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.446 [2024-10-01 13:44:06.442594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.446 [2024-10-01 13:44:06.451581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.451632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.446 [2024-10-01 13:44:06.451754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.451798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.446 [2024-10-01 13:44:06.451819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.451871] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.446 [2024-10-01 13:44:06.451912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.446 [2024-10-01 13:44:06.451930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.446 [2024-10-01 13:44:06.451965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.451989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.446 [2024-10-01 13:44:06.452016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.452034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.452048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.446 [2024-10-01 13:44:06.452065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.446 [2024-10-01 13:44:06.452080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.446 [2024-10-01 13:44:06.452094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.452128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.447 [2024-10-01 13:44:06.452159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.447 [2024-10-01 13:44:06.462631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.462686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.462789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.462822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.447 [2024-10-01 13:44:06.462840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.462891] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.462916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.447 [2024-10-01 13:44:06.462932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.462966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.462990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.463016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.463034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.463049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.463066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.463101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.463117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.463166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.447 [2024-10-01 13:44:06.463190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.447 [2024-10-01 13:44:06.472773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.472828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.472928] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.472961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.447 [2024-10-01 13:44:06.472979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.473030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.473055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.447 [2024-10-01 13:44:06.473071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.473105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.473133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.473178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.473199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.473213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.473231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.473246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.473259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.473305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.447 [2024-10-01 13:44:06.473328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.447 [2024-10-01 13:44:06.483734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.483787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.483903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.483939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.447 [2024-10-01 13:44:06.483957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.484009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.484034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.447 [2024-10-01 13:44:06.484050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.485163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.485233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.485449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.485487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.485505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.485524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.485554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.485569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.486663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.447 [2024-10-01 13:44:06.486702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.447 [2024-10-01 13:44:06.493868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.494867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.494974] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.495012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.447 [2024-10-01 13:44:06.495032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.495278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.495322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.447 [2024-10-01 13:44:06.495342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.495361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.495414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.495438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.495453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.495468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.495503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.447 [2024-10-01 13:44:06.495524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.495557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.495574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.495606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.447 [2024-10-01 13:44:06.503967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.505400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.505448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.447 [2024-10-01 13:44:06.505488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.506456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.506616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.506667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.506687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.506702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.506737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.447 [2024-10-01 13:44:06.506804] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.506833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.447 [2024-10-01 13:44:06.506851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.506885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.506917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.506934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.506948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.506979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.447 [2024-10-01 13:44:06.514879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.515008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.515051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.447 [2024-10-01 13:44:06.515072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.516206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.516936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.516978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.516997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.517105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.447 [2024-10-01 13:44:06.517160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.517525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.517583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.447 [2024-10-01 13:44:06.517604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.517751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.517898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.517951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.517969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.518013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.447 [2024-10-01 13:44:06.524984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.525106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.525179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.447 [2024-10-01 13:44:06.525204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.525239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.525272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.525290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.525305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.525338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.447 [2024-10-01 13:44:06.527729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.527849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.527903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.447 [2024-10-01 13:44:06.527926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.527962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.527994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.528012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.528033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.528064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.447 [2024-10-01 13:44:06.535583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.535707] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.535743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.447 [2024-10-01 13:44:06.535762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.535796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.535828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.535846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.535861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.535907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.447 [2024-10-01 13:44:06.538727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.539030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.539076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.447 [2024-10-01 13:44:06.539097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.539148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.539194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.539214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.539229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.539261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.447 [2024-10-01 13:44:06.545844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.545969] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.546004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.447 [2024-10-01 13:44:06.546024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.546058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.546091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.546109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.546126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.546173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.447 [2024-10-01 13:44:06.549986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.550107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.447 [2024-10-01 13:44:06.550151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.447 [2024-10-01 13:44:06.550179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.447 [2024-10-01 13:44:06.550215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.447 [2024-10-01 13:44:06.550248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.447 [2024-10-01 13:44:06.550266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.447 [2024-10-01 13:44:06.550280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.447 [2024-10-01 13:44:06.550312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.447 [2024-10-01 13:44:06.556484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.447 [2024-10-01 13:44:06.556622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.556665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.448 [2024-10-01 13:44:06.556685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.556739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.556773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.556791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.556805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.556838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.448 [2024-10-01 13:44:06.560082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.560214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.560258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.448 [2024-10-01 13:44:06.560279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.560314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.560347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.560366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.560381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.560660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.448 [2024-10-01 13:44:06.567363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.567485] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.567521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.448 [2024-10-01 13:44:06.567557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.567595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.567628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.567646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.567660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.567693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.448 [2024-10-01 13:44:06.570634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.570754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.570809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.448 [2024-10-01 13:44:06.570830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.570864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.570896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.570913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.570942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.572064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.448 [2024-10-01 13:44:06.578308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.578431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.578474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.448 [2024-10-01 13:44:06.578496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.578530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.578581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.578600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.578614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.578647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.448 [2024-10-01 13:44:06.581521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.581653] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.581692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.448 [2024-10-01 13:44:06.581712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.581746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.581779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.581796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.581811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.581842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.448 [2024-10-01 13:44:06.588412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.588548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.588598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.448 [2024-10-01 13:44:06.588619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.588653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.588686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.588703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.588717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.588750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.448 [2024-10-01 13:44:06.592551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.592671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.592732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.448 [2024-10-01 13:44:06.592754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.592789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.592822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.592840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.592855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.592887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.448 [2024-10-01 13:44:06.599282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.599454] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.599489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.448 [2024-10-01 13:44:06.599508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.599562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.599599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.599617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.599632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.600766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.448 [2024-10-01 13:44:06.602862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.602980] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.603023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.448 [2024-10-01 13:44:06.603043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.603077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.603110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.603133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.603160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.603197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.448 [2024-10-01 13:44:06.610319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.610451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.610485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.448 [2024-10-01 13:44:06.610504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.610553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.610620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.610640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.610655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.610693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.448 [2024-10-01 13:44:06.613624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.613755] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.613788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.448 [2024-10-01 13:44:06.613807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.613840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.613872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.613890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.613904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.613936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.448 [2024-10-01 13:44:06.621393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.621520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.621570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.448 [2024-10-01 13:44:06.621590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.621625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.621658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.621676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.621690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.621723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.448 [2024-10-01 13:44:06.624599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.624720] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.624754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.448 [2024-10-01 13:44:06.624774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.624808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.624841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.624859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.624874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.624906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.448 [2024-10-01 13:44:06.631497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.631639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.631679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.448 [2024-10-01 13:44:06.631699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.631733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.631765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.631783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.631798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.632077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.448 [2024-10-01 13:44:06.635569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.635692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.635728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.448 [2024-10-01 13:44:06.635746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.635780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.635813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.635831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.635845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.635876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.448 [2024-10-01 13:44:06.642079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.642214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.642269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.448 [2024-10-01 13:44:06.642290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.642325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.642358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.642376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.642391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.642424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.448 [2024-10-01 13:44:06.645669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.645795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.645835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.448 [2024-10-01 13:44:06.645881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.645917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.645951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.645969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.645983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.646254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.448 [2024-10-01 13:44:06.652904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.653026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.653061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.448 [2024-10-01 13:44:06.653079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.653114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.653162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.653184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.653199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.448 [2024-10-01 13:44:06.653231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.448 [2024-10-01 13:44:06.656215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.448 [2024-10-01 13:44:06.656336] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.448 [2024-10-01 13:44:06.656369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.448 [2024-10-01 13:44:06.656387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.448 [2024-10-01 13:44:06.656421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.448 [2024-10-01 13:44:06.656453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.448 [2024-10-01 13:44:06.656471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.448 [2024-10-01 13:44:06.656486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.656519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.449 [2024-10-01 13:44:06.663834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.663968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.664013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.449 [2024-10-01 13:44:06.664033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.664068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.664101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.664161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.664180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.664216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.449 8661.36 IOPS, 33.83 MiB/s [2024-10-01 13:44:06.669939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.670999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.671047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.449 [2024-10-01 13:44:06.671069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.671274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.672437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.672478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.672497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.673769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.449 [2024-10-01 13:44:06.674925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.675053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.675095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.449 [2024-10-01 13:44:06.675116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.675182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.675222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.675240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.675254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.675287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.449 [2024-10-01 13:44:06.680035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.681072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.681119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.449 [2024-10-01 13:44:06.681152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.681358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.681418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.681441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.681456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.681490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.449 [2024-10-01 13:44:06.686200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.687387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.687435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.449 [2024-10-01 13:44:06.687456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.688097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.688225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.688261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.688279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.688315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.449 [2024-10-01 13:44:06.692239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.692369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.692404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.449 [2024-10-01 13:44:06.692423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.692458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.692491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.692509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.692524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.692574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.449 [2024-10-01 13:44:06.696318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.696470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.696513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.449 [2024-10-01 13:44:06.696547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.697838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.698096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.698138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.698165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.698962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.449 [2024-10-01 13:44:06.702507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.702646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.702691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.449 [2024-10-01 13:44:06.702712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.702778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.702812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.702830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.702844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.703107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.449 [2024-10-01 13:44:06.706629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.706751] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.706794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.449 [2024-10-01 13:44:06.706815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.706850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.706882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.706900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.706914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.706947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.449 [2024-10-01 13:44:06.713141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.713268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.713311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.449 [2024-10-01 13:44:06.713331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.713365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.713398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.713416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.713430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.713462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.449 [2024-10-01 13:44:06.716732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.716849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.716894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.449 [2024-10-01 13:44:06.716915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.716949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.716981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.716998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.717029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.717065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.449 [2024-10-01 13:44:06.723238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.723358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.723404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.449 [2024-10-01 13:44:06.723424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.724389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.724629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.724666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.724684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.724728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.449 [2024-10-01 13:44:06.727769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.727948] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.727992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.449 [2024-10-01 13:44:06.728013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.728047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.728080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.728097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.728112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.728154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.449 [2024-10-01 13:44:06.735466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.735604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.735647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.449 [2024-10-01 13:44:06.735668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.735702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.735735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.735753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.735767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.735799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.449 [2024-10-01 13:44:06.738713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.738857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.738919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.449 [2024-10-01 13:44:06.738957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.738994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.739028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.739045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.739060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.739093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.449 [2024-10-01 13:44:06.745592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.745715] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.745749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.449 [2024-10-01 13:44:06.745767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.745801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.745833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.745850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.745865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.745897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.449 [2024-10-01 13:44:06.749886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.750010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.750054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.449 [2024-10-01 13:44:06.750075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.750110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.750155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.750179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.750193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.750227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.449 [2024-10-01 13:44:06.756365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.756489] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.756548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.449 [2024-10-01 13:44:06.756572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.449 [2024-10-01 13:44:06.756608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.449 [2024-10-01 13:44:06.756662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.449 [2024-10-01 13:44:06.756682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.449 [2024-10-01 13:44:06.756696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.449 [2024-10-01 13:44:06.757800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.449 [2024-10-01 13:44:06.759988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.449 [2024-10-01 13:44:06.760107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.449 [2024-10-01 13:44:06.760158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.449 [2024-10-01 13:44:06.760182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.760217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.760487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.760523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.760557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.760705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.450 [2024-10-01 13:44:06.767256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.767376] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.767419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.450 [2024-10-01 13:44:06.767440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.767473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.767506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.767523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.767554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.767591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.450 [2024-10-01 13:44:06.770566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.770686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.770734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.450 [2024-10-01 13:44:06.770754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.770788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.770821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.770838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.770853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.770902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.450 [2024-10-01 13:44:06.778181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.778304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.778347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.450 [2024-10-01 13:44:06.778368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.778402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.778435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.778453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.778467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.778499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.450 [2024-10-01 13:44:06.781431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.781565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.781608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.450 [2024-10-01 13:44:06.781628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.781663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.781696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.781713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.781727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.781759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.450 [2024-10-01 13:44:06.788415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.788590] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.788631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.450 [2024-10-01 13:44:06.788651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.788688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.788722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.788741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.788755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.788789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.450 [2024-10-01 13:44:06.792618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.792743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.792785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.450 [2024-10-01 13:44:06.792837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.792875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.792908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.792926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.792940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.792973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.450 [2024-10-01 13:44:06.799218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.799341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.799387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.450 [2024-10-01 13:44:06.799408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.799442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.799475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.799493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.799507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.799557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.450 [2024-10-01 13:44:06.802777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.802897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.802940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.450 [2024-10-01 13:44:06.802960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.802995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.803027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.803045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.803059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.803091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.450 [2024-10-01 13:44:06.810021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.810150] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.810197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.450 [2024-10-01 13:44:06.810218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.810253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.810286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.810322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.810338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.810372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.450 [2024-10-01 13:44:06.813341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.813462] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.813506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.450 [2024-10-01 13:44:06.813526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.813577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.813612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.813630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.813644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.814744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.450 [2024-10-01 13:44:06.820920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.821040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.821083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.450 [2024-10-01 13:44:06.821103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.821143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.821186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.821205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.821220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.821252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.450 [2024-10-01 13:44:06.824152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.824275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.824318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.450 [2024-10-01 13:44:06.824339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.824373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.824406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.824424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.824459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.824496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.450 [2024-10-01 13:44:06.831015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.831146] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.831194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.450 [2024-10-01 13:44:06.831215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.831251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.831283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.831301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.831316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.831596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.450 [2024-10-01 13:44:06.835031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.835161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.835195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.450 [2024-10-01 13:44:06.835214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.835249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.835281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.835299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.835314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.835347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.450 [2024-10-01 13:44:06.841583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.841704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.841746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.450 [2024-10-01 13:44:06.841766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.841801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.841834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.841852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.841866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.841898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.450 [2024-10-01 13:44:06.845137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.845277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.845321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.450 [2024-10-01 13:44:06.845342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.845409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.845444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.845461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.845476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.845509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.450 [2024-10-01 13:44:06.852675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.852858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.852895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.450 [2024-10-01 13:44:06.852914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.852952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.852986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.853004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.853020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.853053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.450 [2024-10-01 13:44:06.856004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.856130] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.856182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.450 [2024-10-01 13:44:06.856204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.856240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.856272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.856289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.856304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.856336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.450 [2024-10-01 13:44:06.863676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.450 [2024-10-01 13:44:06.863799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.450 [2024-10-01 13:44:06.863843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.450 [2024-10-01 13:44:06.863864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.450 [2024-10-01 13:44:06.863910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.450 [2024-10-01 13:44:06.863945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.450 [2024-10-01 13:44:06.863964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.450 [2024-10-01 13:44:06.864010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.450 [2024-10-01 13:44:06.864045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.450 [2024-10-01 13:44:06.866913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.867031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.867073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.451 [2024-10-01 13:44:06.867094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.867130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.867179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.867199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.867213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.867245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.451 [2024-10-01 13:44:06.873793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.873916] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.873959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.451 [2024-10-01 13:44:06.873980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.874014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.874047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.874065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.874080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.874111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.451 [2024-10-01 13:44:06.877990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.878131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.878183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.451 [2024-10-01 13:44:06.878205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.878241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.878275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.878292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.878306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.878339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.451 [2024-10-01 13:44:06.884351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.884473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.884548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.451 [2024-10-01 13:44:06.884573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.884610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.884643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.884662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.884676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.884708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.451 [2024-10-01 13:44:06.888087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.888214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.888259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.451 [2024-10-01 13:44:06.888279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.888561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.888725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.888759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.888776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.888887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.451 [2024-10-01 13:44:06.895129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.895256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.895300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.451 [2024-10-01 13:44:06.895320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.895354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.895387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.895405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.895420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.895452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.451 [2024-10-01 13:44:06.898429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.898565] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.898608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.451 [2024-10-01 13:44:06.898628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.898664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.898717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.898737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.898751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.899853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.451 [2024-10-01 13:44:06.906049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.906179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.906224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.451 [2024-10-01 13:44:06.906245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.906280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.906313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.906332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.906346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.906379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.451 [2024-10-01 13:44:06.909327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.909446] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.909492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.451 [2024-10-01 13:44:06.909512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.909562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.909598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.909616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.909631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.909663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.451 [2024-10-01 13:44:06.916158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.916278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.916321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.451 [2024-10-01 13:44:06.916342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.916376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.916408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.916426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.916441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.916755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.451 [2024-10-01 13:44:06.920240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.920360] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.920400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.451 [2024-10-01 13:44:06.920420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.920454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.920486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.920504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.920518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.920566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.451 [2024-10-01 13:44:06.926690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.926810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.926854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.451 [2024-10-01 13:44:06.926874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.926909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.926941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.926959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.926974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.927005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.451 [2024-10-01 13:44:06.930341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.930461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.930501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.451 [2024-10-01 13:44:06.930521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.930571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.930607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.930625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.930640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.930912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.451 [2024-10-01 13:44:06.937507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.937642] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.937681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.451 [2024-10-01 13:44:06.937719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.937756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.937789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.937807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.937821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.937853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.451 [2024-10-01 13:44:06.940814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.940933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.940976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.451 [2024-10-01 13:44:06.940996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.941030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.941063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.941081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.941095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.941129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.451 [2024-10-01 13:44:06.948400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.948522] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.948578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.451 [2024-10-01 13:44:06.948600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.948635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.948668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.948686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.948701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.948732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.451 [2024-10-01 13:44:06.951650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.951767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.951810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.451 [2024-10-01 13:44:06.951830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.951864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.951911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.951949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.951964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.951998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.451 [2024-10-01 13:44:06.958516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.958654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.958697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.451 [2024-10-01 13:44:06.958718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.958758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.958798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.958818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.958832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.959114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.451 [2024-10-01 13:44:06.962642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.451 [2024-10-01 13:44:06.962759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.451 [2024-10-01 13:44:06.962793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.451 [2024-10-01 13:44:06.962812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.451 [2024-10-01 13:44:06.962846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.451 [2024-10-01 13:44:06.962878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.451 [2024-10-01 13:44:06.962904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.451 [2024-10-01 13:44:06.962918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.451 [2024-10-01 13:44:06.962958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.451 [2024-10-01 13:44:06.969228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:06.969346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:06.969380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.452 [2024-10-01 13:44:06.969398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:06.969432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:06.969464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:06.969482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:06.969497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:06.969529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.452 [2024-10-01 13:44:06.973029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:06.973161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:06.973197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.452 [2024-10-01 13:44:06.973216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:06.973251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:06.973284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:06.973303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:06.973317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:06.973350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.452 [2024-10-01 13:44:06.979625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:06.979747] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:06.979781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.452 [2024-10-01 13:44:06.979799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:06.979833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:06.979866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:06.979898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:06.979915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:06.979948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.452 [2024-10-01 13:44:06.983135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:06.983263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:06.983309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.452 [2024-10-01 13:44:06.983329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:06.983364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:06.983396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:06.983414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:06.983428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:06.983461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.452 [2024-10-01 13:44:06.989725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:06.989845] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:06.989879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.452 [2024-10-01 13:44:06.989897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:06.989951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:06.989985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:06.990003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:06.990017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:06.990290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.452 [2024-10-01 13:44:06.993813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:06.993932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:06.993971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.452 [2024-10-01 13:44:06.993992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:06.994027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:06.994060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:06.994078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:06.994093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:06.994127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.452 [2024-10-01 13:44:07.000296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.000417] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.000450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.452 [2024-10-01 13:44:07.000469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:07.000502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:07.000549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:07.000571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:07.000586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:07.000619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.452 [2024-10-01 13:44:07.003922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.004053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.004087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.452 [2024-10-01 13:44:07.004106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:07.004145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:07.004187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:07.004206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:07.004236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:07.004511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.452 [2024-10-01 13:44:07.011334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.011458] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.011493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.452 [2024-10-01 13:44:07.011512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:07.011562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:07.011599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:07.011618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:07.011632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:07.011664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.452 [2024-10-01 13:44:07.014613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.014748] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.014793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.452 [2024-10-01 13:44:07.014818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:07.014854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:07.016252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:07.016304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:07.016328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:07.016640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.452 [2024-10-01 13:44:07.022368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.022497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.022532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.452 [2024-10-01 13:44:07.022574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:07.022610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:07.022651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:07.022671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:07.022685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:07.022718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.452 [2024-10-01 13:44:07.025563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.025738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.025809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.452 [2024-10-01 13:44:07.025833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:07.025873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:07.025909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:07.025927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:07.025941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:07.025975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.452 [2024-10-01 13:44:07.032468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.032607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.032643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.452 [2024-10-01 13:44:07.032662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:07.032927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:07.033094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:07.033131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:07.033161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:07.033277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.452 [2024-10-01 13:44:07.036310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.036430] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.036469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.452 [2024-10-01 13:44:07.036489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:07.036524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:07.036575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:07.036594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:07.036608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:07.036642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.452 [2024-10-01 13:44:07.042795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.042917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.042951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.452 [2024-10-01 13:44:07.042970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:07.043004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:07.043057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:07.043077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:07.043091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:07.043125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.452 [2024-10-01 13:44:07.046408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.046529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.046582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.452 [2024-10-01 13:44:07.046602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:07.046637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:07.046669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:07.046687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:07.046702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:07.046965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.452 [2024-10-01 13:44:07.053615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.053735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.053779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.452 [2024-10-01 13:44:07.053800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:07.053834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:07.053866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:07.053884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:07.053898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:07.053930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.452 [2024-10-01 13:44:07.056933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.057056] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.057089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.452 [2024-10-01 13:44:07.057108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:07.057150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:07.057192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:07.057210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:07.057225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:07.057277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.452 [2024-10-01 13:44:07.064560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.064680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.064720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.452 [2024-10-01 13:44:07.064740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.452 [2024-10-01 13:44:07.064774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.452 [2024-10-01 13:44:07.064806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.452 [2024-10-01 13:44:07.064824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.452 [2024-10-01 13:44:07.064839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.452 [2024-10-01 13:44:07.064871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.452 [2024-10-01 13:44:07.067751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.452 [2024-10-01 13:44:07.067869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.452 [2024-10-01 13:44:07.067919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.452 [2024-10-01 13:44:07.067940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.067975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.068007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.068024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.068039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.068071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.453 [2024-10-01 13:44:07.074662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.074782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.074815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.453 [2024-10-01 13:44:07.074834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.074867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.074900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.074918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.074932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.075201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.453 [2024-10-01 13:44:07.078639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.078758] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.078800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.453 [2024-10-01 13:44:07.078838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.078876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.078910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.078928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.078942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.078975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.453 [2024-10-01 13:44:07.085129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.085258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.085293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.453 [2024-10-01 13:44:07.085312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.085345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.085384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.085401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.085416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.085449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.453 [2024-10-01 13:44:07.088737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.088857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.088896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.453 [2024-10-01 13:44:07.088916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.088950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.088983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.089001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.089015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.089287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.453 [2024-10-01 13:44:07.095939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.096058] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.096093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.453 [2024-10-01 13:44:07.096112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.096162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.096200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.096235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.096251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.096285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.453 [2024-10-01 13:44:07.099230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.099350] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.099384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.453 [2024-10-01 13:44:07.099403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.099437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.099469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.099487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.099502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.099548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.453 [2024-10-01 13:44:07.106868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.106990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.107029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.453 [2024-10-01 13:44:07.107049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.107083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.107115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.107143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.107167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.107203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.453 [2024-10-01 13:44:07.110054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.110182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.110226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.453 [2024-10-01 13:44:07.110246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.110281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.110314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.110331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.110346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.110378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.453 [2024-10-01 13:44:07.116964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.117086] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.117122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.453 [2024-10-01 13:44:07.117153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.117197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.117230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.117248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.117263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.117295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.453 [2024-10-01 13:44:07.121015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.121141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.121194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.453 [2024-10-01 13:44:07.121216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.121251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.121285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.121303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.121318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.121351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.453 [2024-10-01 13:44:07.127495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.127627] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.127673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.453 [2024-10-01 13:44:07.127694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.127729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.127761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.127779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.127793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.127825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.453 [2024-10-01 13:44:07.131114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.131245] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.131281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.453 [2024-10-01 13:44:07.131299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.131353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.131387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.131405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.131419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.131699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.453 [2024-10-01 13:44:07.138300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.138418] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.138451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.453 [2024-10-01 13:44:07.138470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.138503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.138551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.138573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.138588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.138621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.453 [2024-10-01 13:44:07.141602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.141721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.141754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.453 [2024-10-01 13:44:07.141772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.141806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.141839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.141856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.141870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.141903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.453 [2024-10-01 13:44:07.149198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.149320] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.149364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.453 [2024-10-01 13:44:07.149385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.149419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.149452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.149470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.149502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.149552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.453 [2024-10-01 13:44:07.152356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.152477] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.152517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.453 [2024-10-01 13:44:07.152550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.152588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.152622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.152640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.152655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.152687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.453 [2024-10-01 13:44:07.159299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.159420] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.159463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.453 [2024-10-01 13:44:07.159484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.159518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.159565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.159586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.159601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.159863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.453 [2024-10-01 13:44:07.163292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.163412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.163446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.453 [2024-10-01 13:44:07.163465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.163498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.163530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.163564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.163579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.163613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.453 [2024-10-01 13:44:07.169779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.453 [2024-10-01 13:44:07.169917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.453 [2024-10-01 13:44:07.169961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.453 [2024-10-01 13:44:07.169981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.453 [2024-10-01 13:44:07.170016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.453 [2024-10-01 13:44:07.170048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.453 [2024-10-01 13:44:07.170067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.453 [2024-10-01 13:44:07.170081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.453 [2024-10-01 13:44:07.170114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.454 [2024-10-01 13:44:07.173390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.454 [2024-10-01 13:44:07.173510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.454 [2024-10-01 13:44:07.173560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.454 [2024-10-01 13:44:07.173582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.454 [2024-10-01 13:44:07.173617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.454 [2024-10-01 13:44:07.173649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.454 [2024-10-01 13:44:07.173667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.454 [2024-10-01 13:44:07.173681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.454 [2024-10-01 13:44:07.173943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.454 [2024-10-01 13:44:07.180633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.454 [2024-10-01 13:44:07.180752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.454 [2024-10-01 13:44:07.180785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.454 [2024-10-01 13:44:07.180804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.454 [2024-10-01 13:44:07.180837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.454 [2024-10-01 13:44:07.180869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.454 [2024-10-01 13:44:07.180887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.454 [2024-10-01 13:44:07.180901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.454 [2024-10-01 13:44:07.180934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.454 [2024-10-01 13:44:07.183894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.454 [2024-10-01 13:44:07.184015] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.454 [2024-10-01 13:44:07.184050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.454 [2024-10-01 13:44:07.184069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.454 [2024-10-01 13:44:07.184103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.454 [2024-10-01 13:44:07.184175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.454 [2024-10-01 13:44:07.184198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.454 [2024-10-01 13:44:07.184213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.454 [2024-10-01 13:44:07.184246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.454 [2024-10-01 13:44:07.191481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.454 [2024-10-01 13:44:07.191617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.454 [2024-10-01 13:44:07.191661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.454 [2024-10-01 13:44:07.191682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.454 [2024-10-01 13:44:07.191716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.454 [2024-10-01 13:44:07.191749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.454 [2024-10-01 13:44:07.191767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.454 [2024-10-01 13:44:07.191781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.454 [2024-10-01 13:44:07.191813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.454 [2024-10-01 13:44:07.194719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.454 [2024-10-01 13:44:07.194838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.454 [2024-10-01 13:44:07.194876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.454 [2024-10-01 13:44:07.194896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.454 [2024-10-01 13:44:07.194930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.454 [2024-10-01 13:44:07.194963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.454 [2024-10-01 13:44:07.194980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.454 [2024-10-01 13:44:07.194995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.454 [2024-10-01 13:44:07.195027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.454 [2024-10-01 13:44:07.201597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.454 [2024-10-01 13:44:07.201717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.454 [2024-10-01 13:44:07.201751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.454 [2024-10-01 13:44:07.201770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.454 [2024-10-01 13:44:07.201804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.454 [2024-10-01 13:44:07.201837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.454 [2024-10-01 13:44:07.201855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.454 [2024-10-01 13:44:07.201870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.454 [2024-10-01 13:44:07.202156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.454 [2024-10-01 13:44:07.205606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.454 [2024-10-01 13:44:07.205726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.454 [2024-10-01 13:44:07.205762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.454 [2024-10-01 13:44:07.205780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.454 [2024-10-01 13:44:07.205814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.454 [2024-10-01 13:44:07.205847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.454 [2024-10-01 13:44:07.205864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.454 [2024-10-01 13:44:07.205879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.454 [2024-10-01 13:44:07.205911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.454 [2024-10-01 13:44:07.212064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.454 [2024-10-01 13:44:07.212197] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.454 [2024-10-01 13:44:07.212233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.454 [2024-10-01 13:44:07.212253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.454 [2024-10-01 13:44:07.212287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.454 [2024-10-01 13:44:07.212320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.454 [2024-10-01 13:44:07.212338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.454 [2024-10-01 13:44:07.212352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.454 [2024-10-01 13:44:07.212384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.454 [2024-10-01 13:44:07.215702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.454 [2024-10-01 13:44:07.215819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.454 [2024-10-01 13:44:07.215862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.454 [2024-10-01 13:44:07.215895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.454 [2024-10-01 13:44:07.215933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.454 [2024-10-01 13:44:07.215966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.454 [2024-10-01 13:44:07.215984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.454 [2024-10-01 13:44:07.215998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.454 [2024-10-01 13:44:07.216270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.454 [2024-10-01 13:44:07.222896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.454 [2024-10-01 13:44:07.223024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.454 [2024-10-01 13:44:07.223073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.454 [2024-10-01 13:44:07.223111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.454 [2024-10-01 13:44:07.223160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.454 [2024-10-01 13:44:07.223198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.454 [2024-10-01 13:44:07.223217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.454 [2024-10-01 13:44:07.223231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.454 [2024-10-01 13:44:07.223264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.454 [2024-10-01 13:44:07.226208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.454 [2024-10-01 13:44:07.226329] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.454 [2024-10-01 13:44:07.226371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.454 [2024-10-01 13:44:07.226392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.454 [2024-10-01 13:44:07.226426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.454 [2024-10-01 13:44:07.226459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.454 [2024-10-01 13:44:07.226477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.454 [2024-10-01 13:44:07.226491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.454 [2024-10-01 13:44:07.226523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.454 [2024-10-01 13:44:07.233823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.454 [2024-10-01 13:44:07.233944] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.454 [2024-10-01 13:44:07.233987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.454 [2024-10-01 13:44:07.234008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.454 [2024-10-01 13:44:07.234042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.454 [2024-10-01 13:44:07.234075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.454 [2024-10-01 13:44:07.234093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.454 [2024-10-01 13:44:07.234108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.454 [2024-10-01 13:44:07.234148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.454 [2024-10-01 13:44:07.237022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.454 [2024-10-01 13:44:07.237148] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.454 [2024-10-01 13:44:07.237193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.454 [2024-10-01 13:44:07.237214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.454 [2024-10-01 13:44:07.237249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.454 [2024-10-01 13:44:07.237282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.455 [2024-10-01 13:44:07.237318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.455 [2024-10-01 13:44:07.237334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.455 [2024-10-01 13:44:07.237368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.455 [2024-10-01 13:44:07.244129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.455 [2024-10-01 13:44:07.244547] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.455 [2024-10-01 13:44:07.244596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.455 [2024-10-01 13:44:07.244627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.455 [2024-10-01 13:44:07.244787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.455 [2024-10-01 13:44:07.244921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.455 [2024-10-01 13:44:07.244952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.455 [2024-10-01 13:44:07.244969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.455 [2024-10-01 13:44:07.245034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.455 [2024-10-01 13:44:07.248089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.455 [2024-10-01 13:44:07.248244] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.455 [2024-10-01 13:44:07.248281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.455 [2024-10-01 13:44:07.248308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.455 [2024-10-01 13:44:07.248344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.455 [2024-10-01 13:44:07.248380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.455 [2024-10-01 13:44:07.248399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.455 [2024-10-01 13:44:07.248413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.455 [2024-10-01 13:44:07.248446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.455 [2024-10-01 13:44:07.254425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.455 [2024-10-01 13:44:07.254564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.455 [2024-10-01 13:44:07.254599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.455 [2024-10-01 13:44:07.254618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.455 [2024-10-01 13:44:07.254653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.455 [2024-10-01 13:44:07.254687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.455 [2024-10-01 13:44:07.254705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.455 [2024-10-01 13:44:07.254719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.455 [2024-10-01 13:44:07.255819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.455 [2024-10-01 13:44:07.258202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.455 [2024-10-01 13:44:07.258324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.455 [2024-10-01 13:44:07.258358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.455 [2024-10-01 13:44:07.258378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.455 [2024-10-01 13:44:07.258659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.455 [2024-10-01 13:44:07.258849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.455 [2024-10-01 13:44:07.258884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.455 [2024-10-01 13:44:07.258901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.455 [2024-10-01 13:44:07.259014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.455 [2024-10-01 13:44:07.265198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.455 [2024-10-01 13:44:07.265320] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.455 [2024-10-01 13:44:07.265364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.455 [2024-10-01 13:44:07.265384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.455 [2024-10-01 13:44:07.265420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.455 [2024-10-01 13:44:07.265453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.455 [2024-10-01 13:44:07.265470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.455 [2024-10-01 13:44:07.265485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.455 [2024-10-01 13:44:07.265516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.455 [2024-10-01 13:44:07.268498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.455 [2024-10-01 13:44:07.268628] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.455 [2024-10-01 13:44:07.268673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.455 [2024-10-01 13:44:07.268693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.455 [2024-10-01 13:44:07.268727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.455 [2024-10-01 13:44:07.268759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.455 [2024-10-01 13:44:07.268777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.455 [2024-10-01 13:44:07.268791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.455 [2024-10-01 13:44:07.268823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.455 [2024-10-01 13:44:07.276177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.455 [2024-10-01 13:44:07.276300] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.455 [2024-10-01 13:44:07.276344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.455 [2024-10-01 13:44:07.276364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.455 [2024-10-01 13:44:07.276418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.455 [2024-10-01 13:44:07.276476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.455 [2024-10-01 13:44:07.276496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.455 [2024-10-01 13:44:07.276511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.455 [2024-10-01 13:44:07.276558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.455 [2024-10-01 13:44:07.279289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.455 [2024-10-01 13:44:07.279412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.455 [2024-10-01 13:44:07.279446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.455 [2024-10-01 13:44:07.279465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.455 [2024-10-01 13:44:07.279499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.455 [2024-10-01 13:44:07.279532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.455 [2024-10-01 13:44:07.279568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.455 [2024-10-01 13:44:07.279583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.455 [2024-10-01 13:44:07.279617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.455 [2024-10-01 13:44:07.286279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.455 [2024-10-01 13:44:07.286401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.455 [2024-10-01 13:44:07.286441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.455 [2024-10-01 13:44:07.286461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.455 [2024-10-01 13:44:07.286743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.455 [2024-10-01 13:44:07.286909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.455 [2024-10-01 13:44:07.286944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.455 [2024-10-01 13:44:07.286962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.455 [2024-10-01 13:44:07.287078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.455 [2024-10-01 13:44:07.290182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.455 [2024-10-01 13:44:07.290303] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.455 [2024-10-01 13:44:07.290345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.455 [2024-10-01 13:44:07.290366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.455 [2024-10-01 13:44:07.290399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.455 [2024-10-01 13:44:07.290432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.455 [2024-10-01 13:44:07.290450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.455 [2024-10-01 13:44:07.290483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.456 [2024-10-01 13:44:07.290518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.456 [2024-10-01 13:44:07.296695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.456 [2024-10-01 13:44:07.296818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.456 [2024-10-01 13:44:07.296878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.456 [2024-10-01 13:44:07.296901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.456 [2024-10-01 13:44:07.296936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.456 [2024-10-01 13:44:07.296969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.456 [2024-10-01 13:44:07.296987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.456 [2024-10-01 13:44:07.297022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.456 [2024-10-01 13:44:07.298186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.456 [2024-10-01 13:44:07.300311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.456 [2024-10-01 13:44:07.300431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.456 [2024-10-01 13:44:07.300489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.456 [2024-10-01 13:44:07.300512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.456 [2024-10-01 13:44:07.300562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.456 [2024-10-01 13:44:07.300829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.456 [2024-10-01 13:44:07.300866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.456 [2024-10-01 13:44:07.300883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.456 [2024-10-01 13:44:07.301030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.456 [2024-10-01 13:44:07.307437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.456 [2024-10-01 13:44:07.307570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.456 [2024-10-01 13:44:07.307606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.456 [2024-10-01 13:44:07.307625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.456 [2024-10-01 13:44:07.307659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.456 [2024-10-01 13:44:07.307692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.456 [2024-10-01 13:44:07.307710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.456 [2024-10-01 13:44:07.307724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.456 [2024-10-01 13:44:07.307756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.456 [2024-10-01 13:44:07.310746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.456 [2024-10-01 13:44:07.310887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.456 [2024-10-01 13:44:07.310926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.456 [2024-10-01 13:44:07.310946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.456 [2024-10-01 13:44:07.310980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.456 [2024-10-01 13:44:07.311012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.456 [2024-10-01 13:44:07.311030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.456 [2024-10-01 13:44:07.311044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.456 [2024-10-01 13:44:07.311077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.456 [2024-10-01 13:44:07.318321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.456 [2024-10-01 13:44:07.318442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.456 [2024-10-01 13:44:07.318477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.456 [2024-10-01 13:44:07.318496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.456 [2024-10-01 13:44:07.318530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.456 [2024-10-01 13:44:07.318581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.456 [2024-10-01 13:44:07.318600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.456 [2024-10-01 13:44:07.318614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.456 [2024-10-01 13:44:07.318646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.456 [2024-10-01 13:44:07.321574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.456 [2024-10-01 13:44:07.321692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.456 [2024-10-01 13:44:07.321734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.456 [2024-10-01 13:44:07.321755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.456 [2024-10-01 13:44:07.321790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.456 [2024-10-01 13:44:07.321823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.456 [2024-10-01 13:44:07.321840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.456 [2024-10-01 13:44:07.321854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.456 [2024-10-01 13:44:07.321886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.456 [2024-10-01 13:44:07.328419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.456 [2024-10-01 13:44:07.328553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.456 [2024-10-01 13:44:07.328587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.456 [2024-10-01 13:44:07.328605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.456 [2024-10-01 13:44:07.328639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.456 [2024-10-01 13:44:07.328689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.456 [2024-10-01 13:44:07.328709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.456 [2024-10-01 13:44:07.328724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.456 [2024-10-01 13:44:07.328986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.456 [2024-10-01 13:44:07.332477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.456 [2024-10-01 13:44:07.332608] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.456 [2024-10-01 13:44:07.332643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.456 [2024-10-01 13:44:07.332661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.456 [2024-10-01 13:44:07.332694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.456 [2024-10-01 13:44:07.332727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.456 [2024-10-01 13:44:07.332744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.456 [2024-10-01 13:44:07.332759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.456 [2024-10-01 13:44:07.332791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.456 [2024-10-01 13:44:07.338937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.456 [2024-10-01 13:44:07.339066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.456 [2024-10-01 13:44:07.339105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.456 [2024-10-01 13:44:07.339126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.456 [2024-10-01 13:44:07.339175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.456 [2024-10-01 13:44:07.339210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.456 [2024-10-01 13:44:07.339229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.456 [2024-10-01 13:44:07.339243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.456 [2024-10-01 13:44:07.339275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.456 [2024-10-01 13:44:07.342587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.456 [2024-10-01 13:44:07.342706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.456 [2024-10-01 13:44:07.342749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.456 [2024-10-01 13:44:07.342769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.456 [2024-10-01 13:44:07.342804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.456 [2024-10-01 13:44:07.342836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.456 [2024-10-01 13:44:07.342854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.456 [2024-10-01 13:44:07.342869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.456 [2024-10-01 13:44:07.343158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.456 [2024-10-01 13:44:07.349768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.457 [2024-10-01 13:44:07.349890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.457 [2024-10-01 13:44:07.349929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.457 [2024-10-01 13:44:07.349948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.457 [2024-10-01 13:44:07.349982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.457 [2024-10-01 13:44:07.350014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.457 [2024-10-01 13:44:07.350032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.457 [2024-10-01 13:44:07.350046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.457 [2024-10-01 13:44:07.350078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.457 [2024-10-01 13:44:07.353077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.457 [2024-10-01 13:44:07.353205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.457 [2024-10-01 13:44:07.353239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.457 [2024-10-01 13:44:07.353258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.457 [2024-10-01 13:44:07.353292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.457 [2024-10-01 13:44:07.353324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.457 [2024-10-01 13:44:07.353341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.457 [2024-10-01 13:44:07.353355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.457 [2024-10-01 13:44:07.353388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.457 [2024-10-01 13:44:07.360690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.457 [2024-10-01 13:44:07.360811] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.457 [2024-10-01 13:44:07.360845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.457 [2024-10-01 13:44:07.360863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.457 [2024-10-01 13:44:07.360897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.457 [2024-10-01 13:44:07.360929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.457 [2024-10-01 13:44:07.360947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.457 [2024-10-01 13:44:07.360962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.457 [2024-10-01 13:44:07.360994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.457 [2024-10-01 13:44:07.363932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.457 [2024-10-01 13:44:07.364051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.457 [2024-10-01 13:44:07.364095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.457 [2024-10-01 13:44:07.364143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.457 [2024-10-01 13:44:07.364190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.457 [2024-10-01 13:44:07.364224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.457 [2024-10-01 13:44:07.364242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.457 [2024-10-01 13:44:07.364256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.457 [2024-10-01 13:44:07.364289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.457 [2024-10-01 13:44:07.370785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.457 [2024-10-01 13:44:07.370906] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.457 [2024-10-01 13:44:07.370949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.457 [2024-10-01 13:44:07.370970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.457 [2024-10-01 13:44:07.371005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.457 [2024-10-01 13:44:07.371037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.457 [2024-10-01 13:44:07.371056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.457 [2024-10-01 13:44:07.371070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.457 [2024-10-01 13:44:07.371101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.457 [2024-10-01 13:44:07.374892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.457 [2024-10-01 13:44:07.375011] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.457 [2024-10-01 13:44:07.375059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.457 [2024-10-01 13:44:07.375079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.457 [2024-10-01 13:44:07.375113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.457 [2024-10-01 13:44:07.375160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.457 [2024-10-01 13:44:07.375182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.457 [2024-10-01 13:44:07.375197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.457 [2024-10-01 13:44:07.375231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.457 [2024-10-01 13:44:07.381444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.457 [2024-10-01 13:44:07.381580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.457 [2024-10-01 13:44:07.381631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.457 [2024-10-01 13:44:07.381652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.457 [2024-10-01 13:44:07.381694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.457 [2024-10-01 13:44:07.381726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.457 [2024-10-01 13:44:07.381764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.457 [2024-10-01 13:44:07.381780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.457 [2024-10-01 13:44:07.381814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.457 [2024-10-01 13:44:07.384988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.457 [2024-10-01 13:44:07.385107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.457 [2024-10-01 13:44:07.385159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.457 [2024-10-01 13:44:07.385182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.457 [2024-10-01 13:44:07.385218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.457 [2024-10-01 13:44:07.385251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.457 [2024-10-01 13:44:07.385269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.457 [2024-10-01 13:44:07.385283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.457 [2024-10-01 13:44:07.385317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.457 [2024-10-01 13:44:07.392368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.457 [2024-10-01 13:44:07.392487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.457 [2024-10-01 13:44:07.392527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.457 [2024-10-01 13:44:07.392563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.457 [2024-10-01 13:44:07.392600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.457 [2024-10-01 13:44:07.392633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.457 [2024-10-01 13:44:07.392651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.457 [2024-10-01 13:44:07.392665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.457 [2024-10-01 13:44:07.392698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.457 [2024-10-01 13:44:07.395639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.457 [2024-10-01 13:44:07.395756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.457 [2024-10-01 13:44:07.395798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.457 [2024-10-01 13:44:07.395818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.457 [2024-10-01 13:44:07.395852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.457 [2024-10-01 13:44:07.395900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.457 [2024-10-01 13:44:07.395921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.457 [2024-10-01 13:44:07.395936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.457 [2024-10-01 13:44:07.395969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.457 [2024-10-01 13:44:07.403302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.457 [2024-10-01 13:44:07.403423] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.458 [2024-10-01 13:44:07.403457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.458 [2024-10-01 13:44:07.403476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.458 [2024-10-01 13:44:07.403510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.458 [2024-10-01 13:44:07.403560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.458 [2024-10-01 13:44:07.403582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.458 [2024-10-01 13:44:07.403597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.458 [2024-10-01 13:44:07.403629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.458 [2024-10-01 13:44:07.406509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.458 [2024-10-01 13:44:07.406640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.458 [2024-10-01 13:44:07.406691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.458 [2024-10-01 13:44:07.406712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.458 [2024-10-01 13:44:07.406745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.458 [2024-10-01 13:44:07.406778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.458 [2024-10-01 13:44:07.406797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.458 [2024-10-01 13:44:07.406811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.458 [2024-10-01 13:44:07.406843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.458 [2024-10-01 13:44:07.413401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.458 [2024-10-01 13:44:07.413521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.458 [2024-10-01 13:44:07.413577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.458 [2024-10-01 13:44:07.413599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.458 [2024-10-01 13:44:07.413634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.458 [2024-10-01 13:44:07.413667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.458 [2024-10-01 13:44:07.413685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.458 [2024-10-01 13:44:07.413699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.458 [2024-10-01 13:44:07.413731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.458 [2024-10-01 13:44:07.417449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.458 [2024-10-01 13:44:07.417582] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.458 [2024-10-01 13:44:07.417624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.458 [2024-10-01 13:44:07.417645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.458 [2024-10-01 13:44:07.417700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.458 [2024-10-01 13:44:07.417734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.458 [2024-10-01 13:44:07.417752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.458 [2024-10-01 13:44:07.417766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.458 [2024-10-01 13:44:07.417799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.458 [2024-10-01 13:44:07.424004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.458 [2024-10-01 13:44:07.424137] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.458 [2024-10-01 13:44:07.424188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.458 [2024-10-01 13:44:07.424210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.458 [2024-10-01 13:44:07.424246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.458 [2024-10-01 13:44:07.424279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.458 [2024-10-01 13:44:07.424298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.458 [2024-10-01 13:44:07.424312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.458 [2024-10-01 13:44:07.424344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.458 [2024-10-01 13:44:07.427562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.458 [2024-10-01 13:44:07.427681] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.458 [2024-10-01 13:44:07.427724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.458 [2024-10-01 13:44:07.427744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.458 [2024-10-01 13:44:07.427779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.458 [2024-10-01 13:44:07.427811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.458 [2024-10-01 13:44:07.427830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.458 [2024-10-01 13:44:07.427844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.458 [2024-10-01 13:44:07.427875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.458 [2024-10-01 13:44:07.434819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.458 [2024-10-01 13:44:07.434938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.458 [2024-10-01 13:44:07.434977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.458 [2024-10-01 13:44:07.434997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.458 [2024-10-01 13:44:07.435031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.458 [2024-10-01 13:44:07.435064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.458 [2024-10-01 13:44:07.435081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.458 [2024-10-01 13:44:07.435114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.458 [2024-10-01 13:44:07.435167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.458 [2024-10-01 13:44:07.438128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.458 [2024-10-01 13:44:07.438258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.458 [2024-10-01 13:44:07.438297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.458 [2024-10-01 13:44:07.438317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.458 [2024-10-01 13:44:07.438351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.458 [2024-10-01 13:44:07.438384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.458 [2024-10-01 13:44:07.438402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.458 [2024-10-01 13:44:07.438417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.458 [2024-10-01 13:44:07.438449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.458 [2024-10-01 13:44:07.445723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.458 [2024-10-01 13:44:07.445844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.458 [2024-10-01 13:44:07.445883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.458 [2024-10-01 13:44:07.445903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.458 [2024-10-01 13:44:07.445936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.458 [2024-10-01 13:44:07.445976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.458 [2024-10-01 13:44:07.445994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.458 [2024-10-01 13:44:07.446009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.459 [2024-10-01 13:44:07.446041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.459 [2024-10-01 13:44:07.448943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.459 [2024-10-01 13:44:07.449063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.459 [2024-10-01 13:44:07.449105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.459 [2024-10-01 13:44:07.449128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.459 [2024-10-01 13:44:07.449176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.459 [2024-10-01 13:44:07.449211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.459 [2024-10-01 13:44:07.449228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.459 [2024-10-01 13:44:07.449242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.459 [2024-10-01 13:44:07.449276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.459 [2024-10-01 13:44:07.455827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.459 [2024-10-01 13:44:07.455980] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.459 [2024-10-01 13:44:07.456020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.459 [2024-10-01 13:44:07.456040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.459 [2024-10-01 13:44:07.456087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.459 [2024-10-01 13:44:07.456363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.459 [2024-10-01 13:44:07.456403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.459 [2024-10-01 13:44:07.456422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.459 [2024-10-01 13:44:07.456585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.459 [2024-10-01 13:44:07.459787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.459 [2024-10-01 13:44:07.459917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.459 [2024-10-01 13:44:07.459957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.459 [2024-10-01 13:44:07.459976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.459 [2024-10-01 13:44:07.460011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.459 [2024-10-01 13:44:07.460044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.459 [2024-10-01 13:44:07.460061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.459 [2024-10-01 13:44:07.460075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.459 [2024-10-01 13:44:07.460107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.459 [2024-10-01 13:44:07.466598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.459 [2024-10-01 13:44:07.467905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.459 [2024-10-01 13:44:07.467972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.459 [2024-10-01 13:44:07.467997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.459 [2024-10-01 13:44:07.468265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.459 [2024-10-01 13:44:07.469472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.459 [2024-10-01 13:44:07.469517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.459 [2024-10-01 13:44:07.469549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.459 [2024-10-01 13:44:07.470232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.459 [2024-10-01 13:44:07.470761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.459 [2024-10-01 13:44:07.470981] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.459 [2024-10-01 13:44:07.471031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.459 [2024-10-01 13:44:07.471056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.459 [2024-10-01 13:44:07.471101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.459 [2024-10-01 13:44:07.471192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.459 [2024-10-01 13:44:07.471230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.459 [2024-10-01 13:44:07.471249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.459 [2024-10-01 13:44:07.471288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.459 [2024-10-01 13:44:07.477068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.459 [2024-10-01 13:44:07.477198] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.459 [2024-10-01 13:44:07.477242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.459 [2024-10-01 13:44:07.477263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.459 [2024-10-01 13:44:07.477298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.459 [2024-10-01 13:44:07.477331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.459 [2024-10-01 13:44:07.477349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.459 [2024-10-01 13:44:07.477364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.459 [2024-10-01 13:44:07.477396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.459 [2024-10-01 13:44:07.481989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.459 [2024-10-01 13:44:07.482108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.459 [2024-10-01 13:44:07.482159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.459 [2024-10-01 13:44:07.482183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.459 [2024-10-01 13:44:07.483263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.459 [2024-10-01 13:44:07.483940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.459 [2024-10-01 13:44:07.483980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.459 [2024-10-01 13:44:07.483998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.459 [2024-10-01 13:44:07.484105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.459 [2024-10-01 13:44:07.487929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.459 [2024-10-01 13:44:07.488048] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.459 [2024-10-01 13:44:07.488090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.459 [2024-10-01 13:44:07.488111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.459 [2024-10-01 13:44:07.488154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.459 [2024-10-01 13:44:07.488194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.459 [2024-10-01 13:44:07.488212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.459 [2024-10-01 13:44:07.488227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.459 [2024-10-01 13:44:07.488277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.459 [2024-10-01 13:44:07.493440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.459 [2024-10-01 13:44:07.494303] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.459 [2024-10-01 13:44:07.494350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.459 [2024-10-01 13:44:07.494372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.459 [2024-10-01 13:44:07.494584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.459 [2024-10-01 13:44:07.494682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.459 [2024-10-01 13:44:07.494706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.459 [2024-10-01 13:44:07.494720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.459 [2024-10-01 13:44:07.494755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.459 [2024-10-01 13:44:07.498026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.459 [2024-10-01 13:44:07.498155] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.459 [2024-10-01 13:44:07.498200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.459 [2024-10-01 13:44:07.498220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.459 [2024-10-01 13:44:07.498256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.459 [2024-10-01 13:44:07.498289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.459 [2024-10-01 13:44:07.498307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.459 [2024-10-01 13:44:07.498321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.459 [2024-10-01 13:44:07.498598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.459 [2024-10-01 13:44:07.503679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.460 [2024-10-01 13:44:07.503798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.460 [2024-10-01 13:44:07.503836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.460 [2024-10-01 13:44:07.503855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.460 [2024-10-01 13:44:07.503902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.460 [2024-10-01 13:44:07.503938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.460 [2024-10-01 13:44:07.503955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.460 [2024-10-01 13:44:07.503970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.460 [2024-10-01 13:44:07.504003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.460 [2024-10-01 13:44:07.508508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.460 [2024-10-01 13:44:07.508645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.460 [2024-10-01 13:44:07.508679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.460 [2024-10-01 13:44:07.508715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.460 [2024-10-01 13:44:07.508751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.460 [2024-10-01 13:44:07.508784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.460 [2024-10-01 13:44:07.508802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.460 [2024-10-01 13:44:07.508816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.460 [2024-10-01 13:44:07.508848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.460 [2024-10-01 13:44:07.513776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.460 [2024-10-01 13:44:07.515201] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.460 [2024-10-01 13:44:07.515248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.460 [2024-10-01 13:44:07.515269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.460 [2024-10-01 13:44:07.516291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.460 [2024-10-01 13:44:07.516446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.460 [2024-10-01 13:44:07.516482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.460 [2024-10-01 13:44:07.516500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.460 [2024-10-01 13:44:07.516553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.460 [2024-10-01 13:44:07.519423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.460 [2024-10-01 13:44:07.519567] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.460 [2024-10-01 13:44:07.519604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.460 [2024-10-01 13:44:07.519623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.460 [2024-10-01 13:44:07.519659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.460 [2024-10-01 13:44:07.519692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.460 [2024-10-01 13:44:07.519710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.460 [2024-10-01 13:44:07.519725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.460 [2024-10-01 13:44:07.519758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.460 [2024-10-01 13:44:07.526573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.460 [2024-10-01 13:44:07.528031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.460 [2024-10-01 13:44:07.528089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.460 [2024-10-01 13:44:07.528116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.460 [2024-10-01 13:44:07.529169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.460 [2024-10-01 13:44:07.529427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.460 [2024-10-01 13:44:07.529490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.460 [2024-10-01 13:44:07.529513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.460 [2024-10-01 13:44:07.529675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.460 [2024-10-01 13:44:07.532438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.460 [2024-10-01 13:44:07.533583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.460 [2024-10-01 13:44:07.533637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.460 [2024-10-01 13:44:07.533663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.460 [2024-10-01 13:44:07.534970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.460 [2024-10-01 13:44:07.535235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.460 [2024-10-01 13:44:07.535278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.460 [2024-10-01 13:44:07.535299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.460 [2024-10-01 13:44:07.536690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.460 [2024-10-01 13:44:07.538032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.460 [2024-10-01 13:44:07.539059] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.460 [2024-10-01 13:44:07.539108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.460 [2024-10-01 13:44:07.539136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.460 [2024-10-01 13:44:07.540411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.460 [2024-10-01 13:44:07.540783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.460 [2024-10-01 13:44:07.540823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.460 [2024-10-01 13:44:07.540841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.460 [2024-10-01 13:44:07.540915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.460 [2024-10-01 13:44:07.542567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.460 [2024-10-01 13:44:07.542683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.460 [2024-10-01 13:44:07.542726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.460 [2024-10-01 13:44:07.542747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.460 [2024-10-01 13:44:07.542781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.460 [2024-10-01 13:44:07.542813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.460 [2024-10-01 13:44:07.542831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.460 [2024-10-01 13:44:07.542845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.460 [2024-10-01 13:44:07.544087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.460 [2024-10-01 13:44:07.548738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.460 [2024-10-01 13:44:07.548890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.460 [2024-10-01 13:44:07.548926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.460 [2024-10-01 13:44:07.548945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.460 [2024-10-01 13:44:07.548980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.460 [2024-10-01 13:44:07.549013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.460 [2024-10-01 13:44:07.549031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.460 [2024-10-01 13:44:07.549047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.460 [2024-10-01 13:44:07.549080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.460 [2024-10-01 13:44:07.553839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.460 [2024-10-01 13:44:07.553993] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.460 [2024-10-01 13:44:07.554033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.460 [2024-10-01 13:44:07.554054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.460 [2024-10-01 13:44:07.554089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.460 [2024-10-01 13:44:07.555193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.460 [2024-10-01 13:44:07.555237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.460 [2024-10-01 13:44:07.555256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.460 [2024-10-01 13:44:07.555917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.460 [2024-10-01 13:44:07.559955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.460 [2024-10-01 13:44:07.560074] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.460 [2024-10-01 13:44:07.560107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.460 [2024-10-01 13:44:07.560126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.461 [2024-10-01 13:44:07.560159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.461 [2024-10-01 13:44:07.560192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.461 [2024-10-01 13:44:07.560209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.461 [2024-10-01 13:44:07.560224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.461 [2024-10-01 13:44:07.560257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.461 [2024-10-01 13:44:07.563951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.461 [2024-10-01 13:44:07.564065] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.461 [2024-10-01 13:44:07.564096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.461 [2024-10-01 13:44:07.564114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.461 [2024-10-01 13:44:07.564172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.461 [2024-10-01 13:44:07.564205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.461 [2024-10-01 13:44:07.564222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.461 [2024-10-01 13:44:07.564237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.461 [2024-10-01 13:44:07.564269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.461 [2024-10-01 13:44:07.570528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.461 [2024-10-01 13:44:07.570693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.461 [2024-10-01 13:44:07.570743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.461 [2024-10-01 13:44:07.570765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.461 [2024-10-01 13:44:07.570800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.461 [2024-10-01 13:44:07.570833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.461 [2024-10-01 13:44:07.570851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.461 [2024-10-01 13:44:07.570865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.461 [2024-10-01 13:44:07.570898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.461 [2024-10-01 13:44:07.574607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.461 [2024-10-01 13:44:07.574724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.461 [2024-10-01 13:44:07.574758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.461 [2024-10-01 13:44:07.574776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.461 [2024-10-01 13:44:07.574814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.461 [2024-10-01 13:44:07.574848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.461 [2024-10-01 13:44:07.574865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.461 [2024-10-01 13:44:07.574879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.461 [2024-10-01 13:44:07.574911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.461 [2024-10-01 13:44:07.581508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.461 [2024-10-01 13:44:07.581639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.461 [2024-10-01 13:44:07.581682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.461 [2024-10-01 13:44:07.581702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.461 [2024-10-01 13:44:07.581736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.461 [2024-10-01 13:44:07.581769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.461 [2024-10-01 13:44:07.581787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.461 [2024-10-01 13:44:07.581819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.461 [2024-10-01 13:44:07.581855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.461 [2024-10-01 13:44:07.585051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.461 [2024-10-01 13:44:07.585173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.461 [2024-10-01 13:44:07.585215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.461 [2024-10-01 13:44:07.585236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.461 [2024-10-01 13:44:07.585271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.461 [2024-10-01 13:44:07.585304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.461 [2024-10-01 13:44:07.585322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.461 [2024-10-01 13:44:07.585336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.461 [2024-10-01 13:44:07.585369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.461 [2024-10-01 13:44:07.592634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.461 [2024-10-01 13:44:07.592752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.461 [2024-10-01 13:44:07.592785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.461 [2024-10-01 13:44:07.592803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.461 [2024-10-01 13:44:07.592837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.461 [2024-10-01 13:44:07.592869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.461 [2024-10-01 13:44:07.592887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.461 [2024-10-01 13:44:07.592901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.461 [2024-10-01 13:44:07.592933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.461 [2024-10-01 13:44:07.595151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.461 [2024-10-01 13:44:07.596024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.461 [2024-10-01 13:44:07.596069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.461 [2024-10-01 13:44:07.596090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.461 [2024-10-01 13:44:07.596271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.461 [2024-10-01 13:44:07.596367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.461 [2024-10-01 13:44:07.596393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.461 [2024-10-01 13:44:07.596409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.461 [2024-10-01 13:44:07.596442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.461 [2024-10-01 13:44:07.604017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.461 [2024-10-01 13:44:07.604251] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.461 [2024-10-01 13:44:07.604288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.461 [2024-10-01 13:44:07.604308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.461 [2024-10-01 13:44:07.604346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.461 [2024-10-01 13:44:07.604380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.461 [2024-10-01 13:44:07.604398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.461 [2024-10-01 13:44:07.604414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.461 [2024-10-01 13:44:07.604448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.461 [2024-10-01 13:44:07.605886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.461 [2024-10-01 13:44:07.606000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.461 [2024-10-01 13:44:07.606046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.461 [2024-10-01 13:44:07.606066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.461 [2024-10-01 13:44:07.606099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.461 [2024-10-01 13:44:07.606130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.461 [2024-10-01 13:44:07.606148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.461 [2024-10-01 13:44:07.606162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.461 [2024-10-01 13:44:07.606194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.461 [2024-10-01 13:44:07.614421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.461 [2024-10-01 13:44:07.614606] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.461 [2024-10-01 13:44:07.614641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.461 [2024-10-01 13:44:07.614661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.461 [2024-10-01 13:44:07.614697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.461 [2024-10-01 13:44:07.614730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.462 [2024-10-01 13:44:07.614748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.462 [2024-10-01 13:44:07.614763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.462 [2024-10-01 13:44:07.614796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.462 [2024-10-01 13:44:07.615981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.462 [2024-10-01 13:44:07.616092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.462 [2024-10-01 13:44:07.616134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.462 [2024-10-01 13:44:07.616154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.462 [2024-10-01 13:44:07.616187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.462 [2024-10-01 13:44:07.617552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.462 [2024-10-01 13:44:07.617591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.462 [2024-10-01 13:44:07.617609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.462 [2024-10-01 13:44:07.618515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.462 [2024-10-01 13:44:07.625097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.462 [2024-10-01 13:44:07.625213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.462 [2024-10-01 13:44:07.625256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.462 [2024-10-01 13:44:07.625276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.462 [2024-10-01 13:44:07.625310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.462 [2024-10-01 13:44:07.625342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.462 [2024-10-01 13:44:07.625359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.462 [2024-10-01 13:44:07.625373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.462 [2024-10-01 13:44:07.625405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.462 [2024-10-01 13:44:07.626743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.462 [2024-10-01 13:44:07.626856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.462 [2024-10-01 13:44:07.626887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.462 [2024-10-01 13:44:07.626905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.462 [2024-10-01 13:44:07.627984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.462 [2024-10-01 13:44:07.628647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.462 [2024-10-01 13:44:07.628686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.462 [2024-10-01 13:44:07.628704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.462 [2024-10-01 13:44:07.628791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.462 [2024-10-01 13:44:07.635900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.462 [2024-10-01 13:44:07.636015] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.462 [2024-10-01 13:44:07.636047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.462 [2024-10-01 13:44:07.636065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.462 [2024-10-01 13:44:07.636099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.462 [2024-10-01 13:44:07.636131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.462 [2024-10-01 13:44:07.636148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.462 [2024-10-01 13:44:07.636163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.462 [2024-10-01 13:44:07.636213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.462 [2024-10-01 13:44:07.636833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.462 [2024-10-01 13:44:07.638116] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.462 [2024-10-01 13:44:07.638160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.462 [2024-10-01 13:44:07.638180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.462 [2024-10-01 13:44:07.638397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.462 [2024-10-01 13:44:07.639180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.462 [2024-10-01 13:44:07.639218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.462 [2024-10-01 13:44:07.639236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.462 [2024-10-01 13:44:07.639443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.462 [2024-10-01 13:44:07.646778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.462 [2024-10-01 13:44:07.646896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.462 [2024-10-01 13:44:07.646937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.462 [2024-10-01 13:44:07.646957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.462 [2024-10-01 13:44:07.646993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.462 [2024-10-01 13:44:07.647039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.462 [2024-10-01 13:44:07.647060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.462 [2024-10-01 13:44:07.647075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.462 [2024-10-01 13:44:07.647108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.462 [2024-10-01 13:44:07.647132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.462 [2024-10-01 13:44:07.647209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.462 [2024-10-01 13:44:07.647237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.462 [2024-10-01 13:44:07.647255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.462 [2024-10-01 13:44:07.647287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.462 [2024-10-01 13:44:07.647318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.462 [2024-10-01 13:44:07.647335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.462 [2024-10-01 13:44:07.647350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.462 [2024-10-01 13:44:07.647381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.462 [2024-10-01 13:44:07.656929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.462 [2024-10-01 13:44:07.657051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.462 [2024-10-01 13:44:07.657094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.462 [2024-10-01 13:44:07.657135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.462 [2024-10-01 13:44:07.657172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.462 [2024-10-01 13:44:07.657221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.462 [2024-10-01 13:44:07.657243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.462 [2024-10-01 13:44:07.657258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.462 [2024-10-01 13:44:07.657292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.462 [2024-10-01 13:44:07.657316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.462 [2024-10-01 13:44:07.657640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.462 [2024-10-01 13:44:07.657683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.462 [2024-10-01 13:44:07.657702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.463 [2024-10-01 13:44:07.657866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.463 [2024-10-01 13:44:07.658015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.463 [2024-10-01 13:44:07.658060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.463 [2024-10-01 13:44:07.658090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.463 [2024-10-01 13:44:07.658137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.463 8682.80 IOPS, 33.92 MiB/s [2024-10-01 13:44:07.668614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.463 [2024-10-01 13:44:07.668671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.463 [2024-10-01 13:44:07.669144] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.463 [2024-10-01 13:44:07.669191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.463 [2024-10-01 13:44:07.669213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.463 [2024-10-01 13:44:07.669285] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.463 [2024-10-01 13:44:07.669319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.463 [2024-10-01 13:44:07.669338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.463 [2024-10-01 13:44:07.669487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.463 [2024-10-01 13:44:07.669518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.463 [2024-10-01 13:44:07.669641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.463 [2024-10-01 13:44:07.669674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.463 [2024-10-01 13:44:07.669692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.463 [2024-10-01 13:44:07.669710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.463 [2024-10-01 13:44:07.669748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.463 [2024-10-01 13:44:07.669764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.463 [2024-10-01 13:44:07.669879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.463 [2024-10-01 13:44:07.669902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.463 
00:16:17.463                                                                 Latency(us)
00:16:17.463 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:16:17.463 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:17.463   Verification LBA range: start 0x0 length 0x4000
00:16:17.463   NVMe0n1                   :      15.01    8683.21      33.92      0.00      0.00   14706.96    2055.45   19184.17
00:16:17.463 ===================================================================================================================
00:16:17.463 Total                       :              8683.21      33.92      0.00      0.00   14706.96    2055.45   19184.17
00:16:17.463 [2024-10-01 13:44:07.678732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:17.463 [2024-10-01 13:44:07.678812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:17.463 [2024-10-01 13:44:07.678911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:16:17.463 [2024-10-01 13:44:07.678951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421
00:16:17.463 [2024-10-01 13:44:07.678972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set
00:16:17.463 [2024-10-01 13:44:07.679030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:16:17.463 [2024-10-01 13:44:07.679062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422
00:16:17.463 [2024-10-01 13:44:07.679085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set
00:16:17.463 [2024-10-01 13:44:07.679106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor
00:16:17.463 [2024-10-01 13:44:07.679127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor
00:16:17.463 [2024-10-01 13:44:07.679145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:16:17.463 [2024-10-01 13:44:07.679160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:16:17.463 [2024-10-01 13:44:07.679174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:17.463 [2024-10-01 13:44:07.679201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:17.463 [2024-10-01 13:44:07.679217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:16:17.463 [2024-10-01 13:44:07.679231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:16:17.463 [2024-10-01 13:44:07.679244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:17.463 [2024-10-01 13:44:07.679261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
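The summary row is consistent with the job parameters printed with it (4096-byte I/O, queue depth 128, 15 s runtime): 8683.21 IOPS at 4096 bytes per I/O is 33.92 MiB/s (MiB taken as 2^20 bytes, which is what the printed 33.92 implies), and queue depth divided by the 14706.96 us average latency lands near the same IOPS figure. A quick cross-check using only numbers from the table:

    # Cross-check of the bdevperf summary above; all inputs are copied from the table.
    awk 'BEGIN {
        iops = 8683.21; io_size = 4096; depth = 128; avg_us = 14706.96
        printf "throughput      = %.2f MiB/s\n", iops * io_size / (1024 * 1024)   # 33.92
        printf "IOPS from depth = %.0f\n", depth / (avg_us / 1e6)                 # ~8703
    }'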
00:16:17.463 [2024-10-01 13:44:07.688826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.463 [2024-10-01 13:44:07.688929] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.463 [2024-10-01 13:44:07.688960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.463 [2024-10-01 13:44:07.688978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.463 [2024-10-01 13:44:07.689034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.463 [2024-10-01 13:44:07.689061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.463 [2024-10-01 13:44:07.689089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.463 [2024-10-01 13:44:07.689106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.463 [2024-10-01 13:44:07.689120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.463 [2024-10-01 13:44:07.689137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.463 [2024-10-01 13:44:07.689193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.463 [2024-10-01 13:44:07.689220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.463 [2024-10-01 13:44:07.689237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.463 [2024-10-01 13:44:07.689257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.463 [2024-10-01 13:44:07.689277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.463 [2024-10-01 13:44:07.689292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.463 [2024-10-01 13:44:07.689306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.463 [2024-10-01 13:44:07.689324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.463 [2024-10-01 13:44:07.698896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.463 [2024-10-01 13:44:07.698999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.463 [2024-10-01 13:44:07.699029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.463 [2024-10-01 13:44:07.699047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.463 [2024-10-01 13:44:07.699068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.463 [2024-10-01 13:44:07.699088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.463 [2024-10-01 13:44:07.699104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.463 [2024-10-01 13:44:07.699118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.463 [2024-10-01 13:44:07.699145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.463 [2024-10-01 13:44:07.699168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.463 [2024-10-01 13:44:07.699237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.463 [2024-10-01 13:44:07.699264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.463 [2024-10-01 13:44:07.699281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.463 [2024-10-01 13:44:07.699301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.463 [2024-10-01 13:44:07.699321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.463 [2024-10-01 13:44:07.699335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.464 [2024-10-01 13:44:07.699364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.464 [2024-10-01 13:44:07.699385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.464 [2024-10-01 13:44:07.708990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.464 [2024-10-01 13:44:07.709200] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.464 [2024-10-01 13:44:07.709236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.464 [2024-10-01 13:44:07.709264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.464 [2024-10-01 13:44:07.709299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.464 [2024-10-01 13:44:07.709335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.464 [2024-10-01 13:44:07.709355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.464 [2024-10-01 13:44:07.709372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.464 [2024-10-01 13:44:07.709401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.464 [2024-10-01 13:44:07.709425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.464 [2024-10-01 13:44:07.709492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.464 [2024-10-01 13:44:07.709520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.464 [2024-10-01 13:44:07.709554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.464 [2024-10-01 13:44:07.709578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.464 [2024-10-01 13:44:07.709599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.464 [2024-10-01 13:44:07.709614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.464 [2024-10-01 13:44:07.709628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.464 [2024-10-01 13:44:07.709647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:17.464 [2024-10-01 13:44:07.719124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.464 [2024-10-01 13:44:07.719315] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.464 [2024-10-01 13:44:07.719349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2d280 with addr=10.0.0.3, port=4421 00:16:17.464 [2024-10-01 13:44:07.719368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d280 is same with the state(6) to be set 00:16:17.464 [2024-10-01 13:44:07.719393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d280 (9): Bad file descriptor 00:16:17.464 [2024-10-01 13:44:07.719418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.464 [2024-10-01 13:44:07.719435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.464 [2024-10-01 13:44:07.719453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.464 [2024-10-01 13:44:07.719472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.464 [2024-10-01 13:44:07.719503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.464 [2024-10-01 13:44:07.719615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.464 [2024-10-01 13:44:07.719644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb259a0 with addr=10.0.0.3, port=4422 00:16:17.464 [2024-10-01 13:44:07.719661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb259a0 is same with the state(6) to be set 00:16:17.464 [2024-10-01 13:44:07.719681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb259a0 (9): Bad file descriptor 00:16:17.464 [2024-10-01 13:44:07.719701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.464 [2024-10-01 13:44:07.719716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.464 [2024-10-01 13:44:07.719730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.464 [2024-10-01 13:44:07.719749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
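The retries above never succeed, and the run ends only because the workload reaches its fixed duration: the next entries report a shutdown after about 15 seconds, matching the 15.01 s runtime in the summary table. An invocation consistent with the job line in that table (queue depth 128, 4096-byte verify I/O, 15 s) would look roughly like the following; the actual command line used by failover.sh is not visible in this log, so treat it as an assumption:

    # Hypothetical bdevperf invocation matching the logged job parameters.
    bdevperf -q 128 -o 4096 -w verify -t 15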
00:16:17.464 Received shutdown signal, test time was about 15.000000 seconds 00:16:17.464 00:16:17.464 Latency(us) 00:16:17.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.464 =================================================================================================================== 00:16:17.464 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:17.464 Process with pid 75429 is not found 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # killprocess 75429 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75429 ']' 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75429 00:16:17.464 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (75429) - No such process 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@977 -- # echo 'Process with pid 75429 is not found' 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # nvmftestfini 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:17.464 rmmod nvme_tcp 00:16:17.464 rmmod nvme_fabrics 00:16:17.464 rmmod nvme_keyring 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 75364 ']' 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 75364 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75364 ']' 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75364 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75364 00:16:17.464 killing process with pid 75364 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75364' 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75364 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # 
wait 75364 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:17.464 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # exit 1 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # trap - ERR 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # print_backtrace 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' 
'/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh' 'nvmf_failover' '--transport=tcp') 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # local args 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1157 -- # xtrace_disable 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:17.465 ========== Backtrace start: ========== 00:16:17.465 00:16:17.465 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_failover"],["/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh"],["--transport=tcp"]) 00:16:17.465 ... 00:16:17.465 1120 timing_enter $test_name 00:16:17.465 1121 echo "************************************" 00:16:17.465 1122 echo "START TEST $test_name" 00:16:17.465 1123 echo "************************************" 00:16:17.465 1124 xtrace_restore 00:16:17.465 1125 time "$@" 00:16:17.465 1126 xtrace_disable 00:16:17.465 1127 echo "************************************" 00:16:17.465 1128 echo "END TEST $test_name" 00:16:17.465 1129 echo "************************************" 00:16:17.465 1130 timing_exit $test_name 00:16:17.465 ... 00:16:17.465 in /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh:25 -> main(["--transport=tcp"]) 00:16:17.465 ... 00:16:17.465 20 fi 00:16:17.465 21 00:16:17.465 22 run_test "nvmf_identify" $rootdir/test/nvmf/host/identify.sh "${TEST_ARGS[@]}" 00:16:17.465 23 run_test "nvmf_perf" $rootdir/test/nvmf/host/perf.sh "${TEST_ARGS[@]}" 00:16:17.465 24 run_test "nvmf_fio_host" $rootdir/test/nvmf/host/fio.sh "${TEST_ARGS[@]}" 00:16:17.465 => 25 run_test "nvmf_failover" $rootdir/test/nvmf/host/failover.sh "${TEST_ARGS[@]}" 00:16:17.465 26 run_test "nvmf_host_discovery" $rootdir/test/nvmf/host/discovery.sh "${TEST_ARGS[@]}" 00:16:17.465 27 run_test "nvmf_host_multipath_status" $rootdir/test/nvmf/host/multipath_status.sh "${TEST_ARGS[@]}" 00:16:17.465 28 run_test "nvmf_discovery_remove_ifc" $rootdir/test/nvmf/host/discovery_remove_ifc.sh "${TEST_ARGS[@]}" 00:16:17.465 29 run_test "nvmf_identify_kernel_target" "$rootdir/test/nvmf/host/identify_kernel_nvmf.sh" "${TEST_ARGS[@]}" 00:16:17.465 30 run_test "nvmf_auth_host" "$rootdir/test/nvmf/host/auth.sh" "${TEST_ARGS[@]}" 00:16:17.465 ... 
00:16:17.465 00:16:17.465 ========== Backtrace end ========== 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1194 -- # return 0 00:16:17.465 00:16:17.465 real 0m22.106s 00:16:17.465 user 1m20.526s 00:16:17.465 sys 0m5.101s 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1 -- # exit 1 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # trap - ERR 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # print_backtrace 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh' 'nvmf_host' '--transport=tcp') 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # local args 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1157 -- # xtrace_disable 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.465 ========== Backtrace start: ========== 00:16:17.465 00:16:17.465 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_host"],["/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh"],["--transport=tcp"]) 00:16:17.465 ... 00:16:17.465 1120 timing_enter $test_name 00:16:17.465 1121 echo "************************************" 00:16:17.465 1122 echo "START TEST $test_name" 00:16:17.465 1123 echo "************************************" 00:16:17.465 1124 xtrace_restore 00:16:17.465 1125 time "$@" 00:16:17.465 1126 xtrace_disable 00:16:17.465 1127 echo "************************************" 00:16:17.465 1128 echo "END TEST $test_name" 00:16:17.465 1129 echo "************************************" 00:16:17.465 1130 timing_exit $test_name 00:16:17.465 ... 00:16:17.465 in /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh:16 -> main(["--transport=tcp"]) 00:16:17.465 ... 00:16:17.465 11 exit 0 00:16:17.465 12 fi 00:16:17.465 13 00:16:17.465 14 run_test "nvmf_target_core" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:16:17.465 15 run_test "nvmf_target_extra" $rootdir/test/nvmf/nvmf_target_extra.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:16:17.465 => 16 run_test "nvmf_host" $rootdir/test/nvmf/nvmf_host.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:16:17.465 17 00:16:17.465 18 # Interrupt mode for now is supported only on the target, with the TCP transport and posix or ssl socket implementations. 00:16:17.465 19 if [[ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" && $SPDK_TEST_URING -eq 0 ]]; then 00:16:17.465 20 run_test "nvmf_target_core_interrupt_mode" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:16:17.465 21 run_test "nvmf_interrupt" $rootdir/test/nvmf/target/interrupt.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:16:17.465 ... 
00:16:17.465 00:16:17.465 ========== Backtrace end ========== 00:16:17.465 13:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1194 -- # return 0 00:16:17.465 00:16:17.465 real 0m49.415s 00:16:17.465 user 2m59.962s 00:16:17.465 sys 0m11.831s 00:16:17.465 13:44:08 nvmf_tcp -- common/autotest_common.sh@1125 -- # trap - ERR 00:16:17.465 13:44:08 nvmf_tcp -- common/autotest_common.sh@1125 -- # print_backtrace 00:16:17.465 13:44:08 nvmf_tcp -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:16:17.465 13:44:08 nvmf_tcp -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh' 'nvmf_tcp' '/home/vagrant/spdk_repo/autorun-spdk.conf') 00:16:17.465 13:44:08 nvmf_tcp -- common/autotest_common.sh@1155 -- # local args 00:16:17.465 13:44:08 nvmf_tcp -- common/autotest_common.sh@1157 -- # xtrace_disable 00:16:17.465 13:44:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:17.465 ========== Backtrace start: ========== 00:16:17.465 00:16:17.465 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_tcp"],["/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh"],["--transport=tcp"]) 00:16:17.465 ... 00:16:17.465 1120 timing_enter $test_name 00:16:17.465 1121 echo "************************************" 00:16:17.465 1122 echo "START TEST $test_name" 00:16:17.465 1123 echo "************************************" 00:16:17.465 1124 xtrace_restore 00:16:17.465 1125 time "$@" 00:16:17.465 1126 xtrace_disable 00:16:17.465 1127 echo "************************************" 00:16:17.465 1128 echo "END TEST $test_name" 00:16:17.465 1129 echo "************************************" 00:16:17.465 1130 timing_exit $test_name 00:16:17.465 ... 00:16:17.465 in /home/vagrant/spdk_repo/spdk/autotest.sh:280 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:16:17.465 ... 00:16:17.465 275 # list of all tests can properly differentiate them. Please do not merge them into one line. 00:16:17.465 276 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then 00:16:17.465 277 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:16:17.465 278 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:16:17.465 279 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then 00:16:17.465 => 280 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:16:17.465 281 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:16:17.465 282 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:16:17.465 283 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:16:17.465 284 fi 00:16:17.465 285 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:16:17.465 ... 
00:16:17.465 00:16:17.465 ========== Backtrace end ========== 00:16:17.465 13:44:08 nvmf_tcp -- common/autotest_common.sh@1194 -- # return 0 00:16:17.465 00:16:17.465 real 9m38.923s 00:16:17.465 user 22m44.036s 00:16:17.465 sys 2m17.998s 00:16:17.465 13:44:08 nvmf_tcp -- common/autotest_common.sh@1 -- # autotest_cleanup 00:16:17.465 13:44:08 nvmf_tcp -- common/autotest_common.sh@1392 -- # local autotest_es=1 00:16:17.465 13:44:08 nvmf_tcp -- common/autotest_common.sh@1393 -- # xtrace_disable 00:16:17.465 13:44:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:29.677 INFO: APP EXITING 00:16:29.677 INFO: killing all VMs 00:16:29.677 INFO: killing vhost app 00:16:29.677 INFO: EXIT DONE 00:16:29.677 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:29.677 Waiting for block devices as requested 00:16:29.677 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:29.677 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:29.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:29.935 Cleaning 00:16:29.935 Removing: /var/run/dpdk/spdk0/config 00:16:29.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:16:29.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:16:29.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:16:29.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:16:29.935 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:16:29.935 Removing: /var/run/dpdk/spdk0/hugepage_info 00:16:29.935 Removing: /var/run/dpdk/spdk1/config 00:16:29.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:16:29.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:16:29.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:16:29.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:16:29.935 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:16:29.935 Removing: /var/run/dpdk/spdk1/hugepage_info 00:16:29.935 Removing: /var/run/dpdk/spdk2/config 00:16:29.935 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:16:29.935 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:16:29.935 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:16:29.935 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:16:29.935 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:16:29.935 Removing: /var/run/dpdk/spdk2/hugepage_info 00:16:29.935 Removing: /var/run/dpdk/spdk3/config 00:16:29.935 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:16:29.935 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:16:29.935 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:16:29.935 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:16:29.935 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:16:30.192 Removing: /var/run/dpdk/spdk3/hugepage_info 00:16:30.192 Removing: /var/run/dpdk/spdk4/config 00:16:30.192 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:16:30.192 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:16:30.192 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:16:30.192 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:16:30.192 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:16:30.192 Removing: /var/run/dpdk/spdk4/hugepage_info 00:16:30.192 Removing: /dev/shm/nvmf_trace.0 00:16:30.192 Removing: /dev/shm/spdk_tgt_trace.pid56705 00:16:30.192 Removing: /var/run/dpdk/spdk0 00:16:30.192 Removing: /var/run/dpdk/spdk1 00:16:30.192 Removing: 
/var/run/dpdk/spdk2 00:16:30.192 Removing: /var/run/dpdk/spdk3 00:16:30.192 Removing: /var/run/dpdk/spdk4 00:16:30.192 Removing: /var/run/dpdk/spdk_pid56552 00:16:30.192 Removing: /var/run/dpdk/spdk_pid56705 00:16:30.192 Removing: /var/run/dpdk/spdk_pid56898 00:16:30.192 Removing: /var/run/dpdk/spdk_pid56979 00:16:30.192 Removing: /var/run/dpdk/spdk_pid56999 00:16:30.192 Removing: /var/run/dpdk/spdk_pid57109 00:16:30.192 Removing: /var/run/dpdk/spdk_pid57119 00:16:30.192 Removing: /var/run/dpdk/spdk_pid57253 00:16:30.192 Removing: /var/run/dpdk/spdk_pid57454 00:16:30.192 Removing: /var/run/dpdk/spdk_pid57604 00:16:30.192 Removing: /var/run/dpdk/spdk_pid57682 00:16:30.192 Removing: /var/run/dpdk/spdk_pid57753 00:16:30.192 Removing: /var/run/dpdk/spdk_pid57844 00:16:30.192 Removing: /var/run/dpdk/spdk_pid57916 00:16:30.192 Removing: /var/run/dpdk/spdk_pid57955 00:16:30.192 Removing: /var/run/dpdk/spdk_pid57985 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58060 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58165 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58598 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58637 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58688 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58691 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58758 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58767 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58828 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58837 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58882 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58893 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58933 00:16:30.192 Removing: /var/run/dpdk/spdk_pid58951 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59081 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59117 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59199 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59533 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59545 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59582 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59595 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59611 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59630 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59643 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59659 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59678 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59697 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59707 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59726 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59745 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59760 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59779 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59793 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59808 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59833 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59841 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59862 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59887 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59906 00:16:30.192 Removing: /var/run/dpdk/spdk_pid59930 00:16:30.192 Removing: /var/run/dpdk/spdk_pid60002 00:16:30.192 Removing: /var/run/dpdk/spdk_pid60036 00:16:30.192 Removing: /var/run/dpdk/spdk_pid60040 00:16:30.192 Removing: /var/run/dpdk/spdk_pid60063 00:16:30.192 Removing: /var/run/dpdk/spdk_pid60078 00:16:30.192 Removing: /var/run/dpdk/spdk_pid60080 00:16:30.192 Removing: /var/run/dpdk/spdk_pid60128 00:16:30.192 Removing: /var/run/dpdk/spdk_pid60136 00:16:30.192 Removing: /var/run/dpdk/spdk_pid60170 00:16:30.192 Removing: /var/run/dpdk/spdk_pid60174 00:16:30.193 Removing: /var/run/dpdk/spdk_pid60188 00:16:30.193 Removing: 
/var/run/dpdk/spdk_pid60193 00:16:30.193 Removing: /var/run/dpdk/spdk_pid60197 00:16:30.193 Removing: /var/run/dpdk/spdk_pid60212 00:16:30.193 Removing: /var/run/dpdk/spdk_pid60216 00:16:30.193 Removing: /var/run/dpdk/spdk_pid60231 00:16:30.193 Removing: /var/run/dpdk/spdk_pid60254 00:16:30.193 Removing: /var/run/dpdk/spdk_pid60285 00:16:30.193 Removing: /var/run/dpdk/spdk_pid60290 00:16:30.193 Removing: /var/run/dpdk/spdk_pid60319 00:16:30.193 Removing: /var/run/dpdk/spdk_pid60328 00:16:30.193 Removing: /var/run/dpdk/spdk_pid60330 00:16:30.193 Removing: /var/run/dpdk/spdk_pid60376 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60382 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60414 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60418 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60431 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60433 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60445 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60448 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60456 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60463 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60545 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60582 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60696 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60730 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60769 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60789 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60806 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60820 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60852 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60873 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60946 00:16:30.450 Removing: /var/run/dpdk/spdk_pid60963 00:16:30.450 Removing: /var/run/dpdk/spdk_pid61007 00:16:30.450 Removing: /var/run/dpdk/spdk_pid61077 00:16:30.450 Removing: /var/run/dpdk/spdk_pid61133 00:16:30.450 Removing: /var/run/dpdk/spdk_pid61162 00:16:30.450 Removing: /var/run/dpdk/spdk_pid61256 00:16:30.450 Removing: /var/run/dpdk/spdk_pid61304 00:16:30.451 Removing: /var/run/dpdk/spdk_pid61337 00:16:30.451 Removing: /var/run/dpdk/spdk_pid61563 00:16:30.451 Removing: /var/run/dpdk/spdk_pid61655 00:16:30.451 Removing: /var/run/dpdk/spdk_pid61689 00:16:30.451 Removing: /var/run/dpdk/spdk_pid61713 00:16:30.451 Removing: /var/run/dpdk/spdk_pid61752 00:16:30.451 Removing: /var/run/dpdk/spdk_pid61780 00:16:30.451 Removing: /var/run/dpdk/spdk_pid61819 00:16:30.451 Removing: /var/run/dpdk/spdk_pid61845 00:16:30.451 Removing: /var/run/dpdk/spdk_pid62231 00:16:30.451 Removing: /var/run/dpdk/spdk_pid62271 00:16:30.451 Removing: /var/run/dpdk/spdk_pid62613 00:16:30.451 Removing: /var/run/dpdk/spdk_pid63079 00:16:30.451 Removing: /var/run/dpdk/spdk_pid63356 00:16:30.451 Removing: /var/run/dpdk/spdk_pid64210 00:16:30.451 Removing: /var/run/dpdk/spdk_pid65116 00:16:30.451 Removing: /var/run/dpdk/spdk_pid65239 00:16:30.451 Removing: /var/run/dpdk/spdk_pid65301 00:16:30.451 Removing: /var/run/dpdk/spdk_pid66707 00:16:30.451 Removing: /var/run/dpdk/spdk_pid67019 00:16:30.451 Removing: /var/run/dpdk/spdk_pid71410 00:16:30.451 Removing: /var/run/dpdk/spdk_pid71776 00:16:30.451 Removing: /var/run/dpdk/spdk_pid71886 00:16:30.451 Removing: /var/run/dpdk/spdk_pid72013 00:16:30.451 Removing: /var/run/dpdk/spdk_pid72047 00:16:30.451 Removing: /var/run/dpdk/spdk_pid72070 00:16:30.451 Removing: /var/run/dpdk/spdk_pid72097 00:16:30.451 Removing: /var/run/dpdk/spdk_pid72202 00:16:30.451 Removing: /var/run/dpdk/spdk_pid72334 00:16:30.451 Removing: /var/run/dpdk/spdk_pid72479 00:16:30.451 Removing: /var/run/dpdk/spdk_pid72553 
00:16:30.451 Removing: /var/run/dpdk/spdk_pid72742 00:16:30.451 Removing: /var/run/dpdk/spdk_pid72832 00:16:30.451 Removing: /var/run/dpdk/spdk_pid72918 00:16:30.451 Removing: /var/run/dpdk/spdk_pid73263 00:16:30.451 Removing: /var/run/dpdk/spdk_pid73683 00:16:30.451 Removing: /var/run/dpdk/spdk_pid73684 00:16:30.451 Removing: /var/run/dpdk/spdk_pid73685 00:16:30.451 Removing: /var/run/dpdk/spdk_pid73954 00:16:30.451 Removing: /var/run/dpdk/spdk_pid74281 00:16:30.451 Removing: /var/run/dpdk/spdk_pid74283 00:16:30.451 Removing: /var/run/dpdk/spdk_pid74611 00:16:30.451 Removing: /var/run/dpdk/spdk_pid74632 00:16:30.451 Removing: /var/run/dpdk/spdk_pid74646 00:16:30.451 Removing: /var/run/dpdk/spdk_pid74673 00:16:30.451 Removing: /var/run/dpdk/spdk_pid74684 00:16:30.451 Removing: /var/run/dpdk/spdk_pid75039 00:16:30.451 Removing: /var/run/dpdk/spdk_pid75082 00:16:30.451 Removing: /var/run/dpdk/spdk_pid75429 00:16:30.451 Clean 00:16:38.562 13:44:29 nvmf_tcp -- common/autotest_common.sh@1451 -- # return 1 00:16:38.562 13:44:29 nvmf_tcp -- common/autotest_common.sh@1 -- # : 00:16:38.562 13:44:29 nvmf_tcp -- common/autotest_common.sh@1 -- # exit 1 00:16:38.573 [Pipeline] } 00:16:38.592 [Pipeline] // timeout 00:16:38.600 [Pipeline] } 00:16:38.615 [Pipeline] // stage 00:16:38.623 [Pipeline] } 00:16:38.626 ERROR: script returned exit code 1 00:16:38.627 Setting overall build result to FAILURE 00:16:38.640 [Pipeline] // catchError 00:16:38.650 [Pipeline] stage 00:16:38.653 [Pipeline] { (Stop VM) 00:16:38.664 [Pipeline] sh 00:16:38.940 + vagrant halt 00:16:43.124 ==> default: Halting domain... 00:16:48.400 [Pipeline] sh 00:16:48.679 + vagrant destroy -f 00:16:52.867 ==> default: Removing domain... 00:16:52.879 [Pipeline] sh 00:16:53.157 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:16:53.166 [Pipeline] } 00:16:53.180 [Pipeline] // stage 00:16:53.186 [Pipeline] } 00:16:53.200 [Pipeline] // dir 00:16:53.204 [Pipeline] } 00:16:53.217 [Pipeline] // wrap 00:16:53.222 [Pipeline] } 00:16:53.234 [Pipeline] // catchError 00:16:53.243 [Pipeline] stage 00:16:53.245 [Pipeline] { (Epilogue) 00:16:53.259 [Pipeline] sh 00:16:53.541 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:16:55.475 [Pipeline] catchError 00:16:55.477 [Pipeline] { 00:16:55.490 [Pipeline] sh 00:16:55.772 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:16:56.030 Artifacts sizes are good 00:16:56.038 [Pipeline] } 00:16:56.052 [Pipeline] // catchError 00:16:56.063 [Pipeline] archiveArtifacts 00:16:56.068 Archiving artifacts 00:16:56.375 [Pipeline] cleanWs 00:16:56.385 [WS-CLEANUP] Deleting project workspace... 00:16:56.385 [WS-CLEANUP] Deferred wipeout is used... 00:16:56.391 [WS-CLEANUP] done 00:16:56.393 [Pipeline] } 00:16:56.410 [Pipeline] // stage 00:16:56.415 [Pipeline] } 00:16:56.429 [Pipeline] // node 00:16:56.434 [Pipeline] End of Pipeline 00:16:56.470 Finished: FAILURE
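One detail worth extracting from the backtraces earlier in the log: each level of the failure (nvmf_failover, then nvmf_host, then nvmf_tcp) passes through the same run_test() wrapper, whose body appears in the excerpts as lines 1120-1130 of autotest_common.sh. A reconstruction is sketched below; the function header and the way the test name and command are bound are not shown in the excerpt and are assumed:

    # Sketch of run_test() based only on the excerpt printed in the backtraces.
    # timing_enter/timing_exit and xtrace_restore/xtrace_disable are SPDK helpers
    # whose definitions are not part of this log.
    run_test() {
        local test_name=$1; shift      # assumed: first argument names the test
        timing_enter $test_name
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        xtrace_restore
        time "$@"                      # the wrapped test runs here; its non-zero exit
                                       # status is what fires the ERR trap seen above
        xtrace_disable
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        timing_exit $test_name
    }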